Newly published: Generative AI, Quadruple Deception & Trust
30 May 2025

Photo: Routledge, modified
Generative AI has taken the world by storm. With millions of regular users and billions of requests and corresponding results, tools employing Generative AI are ceaselessly used and abused for a wide variety of purposes. This newly published article by Prof. Dr. Judith Simon focuses on the problem of deception resulting from Generative AI and proposes the notion of quadruple deception to capture a set of related yet distinct forms of deception.
Simon, J. (2025). Generative AI, Quadruple Deception & Trust. Social Epistemology, 1–15. https://doi.org/10.1080/02691728.2025.2491087
Link to open access article
Generative AI, Quadruple Deception & Trust
ABSTRACT
Generative AI has taken the world by storm. With millions of regular users and billions of requests and corresponding results, tools employing Generative AI are ceaselessly used and abused for a wide variety of purposes. This article focuses on the problem of deception resulting from Generative AI and proposes the notion of quadruple deception to capture a set of related yet distinct forms of deception: 1) deception regarding the ontological status of one’s interactional counterpart, 2) deception regarding the capacities of AI, 3) deception through content created with Generative AI, as well as 4) deception resulting from the integration of Generative AI into other software. Arguing that deception severely challenges practices of assessing trustworthiness and placing trust wisely, I assess the epistemic, ethical and political implications of misplaced trust and distrust resulting from these four kinds of deception. The article concludes with some suggestions on how the trustworthiness of Generative AI could be increased to ground more justified trust, and sketches corresponding duties for the design, development and deployment of Generative AI, the discourse about Generative AI, as well as the governance of Generative AI.