Artificial intelligence did not arrive as a revelation. It arrived quietly, folded into browsers and phones, marketed as assistance rather than authority. Yet something about this moment feels different from previous technological shifts. Across platforms and private conversations, people are not merely using AI to retrieve information or automate tasks. They are conversing with it, narrating themselves through it, and in some cases allowing it to stabilize beliefs that were already unstable. The public response has been to reach for familiar explanations. AI is described as dangerous, manipulative, or godlike. These explanations are emotionally legible, but they misidentify the problem.
Delusion did not begin with machines. What has changed is the architecture through which meaning is formed, affirmed, and rehearsed.
For the first time, large numbers of people have sustained access to systems that respond fluently, immediately, and with apparent attentiveness. These systems do not simply deliver content. They mirror language, reflect tone, and complete narratives. They simulate understanding well enough that it can be mistaken for recognition. In societies already strained by loneliness, economic precarity, and institutional distrust, this responsiveness can feel stabilizing in ways that are difficult to articulate but easy to overinvest in.
This matters because human beings do not only seek information. They seek coherence. They seek to be witnessed. When traditional sites of meaning making weaken, whether family structures, religious institutions, public intellectual life, or credible media, people look elsewhere to assemble a sense of order. AI does not invent this hunger. It meets it.
A critical distinction is often lost in public discussion. Validation of emotion is not the same as validation of belief. Therapeutic language, when unmoored from clinical responsibility, can slide from acknowledging feeling into reinforcing interpretation. Many AI systems are designed to be supportive, non-confrontational, and coherent. These are not flaws in themselves. But in the absence of explicit epistemic boundaries, coherence can be mistaken for truth, and affirmation can be mistaken for endorsement.
This slippage is especially dangerous for individuals already experiencing psychological vulnerability. Delusional thinking does not typically emerge from nothing. It is often the result of stress, trauma, isolation, or untreated mental illness interacting with meaning-hungry environments. In the past, sustaining such beliefs required social reinforcement: a charismatic leader, a sect, a forum, a movement. What is new is the possibility of rehearsing and refining a belief system in private with a system that never tires, never withdraws, and rarely contradicts unless explicitly designed to do so.
In this sense, AI functions less like a prophet and more like a mirror that speaks. Mirrors have always been powerful. The danger is not that the reflection exists, but that the viewer forgets it is a reflection.
Historical parallels are instructive. Religions, conspiracy theories, and ideological movements have long provided explanatory frameworks that simplify chaos and assign meaning to suffering. These systems did not spread because they were irrational. They spread because they were emotionally and narratively satisfying. AI does not replace these structures. It accelerates the process by which individuals can construct and personalize them.
In the Caribbean and across much of the Global South, this acceleration collides with another reality. Mental health infrastructure is thin and underfunded, and care itself remains stigmatized. Access to therapy, psychiatric care, and early intervention is uneven at best. At the same time, access to mobile technology is comparatively high. The result is a landscape in which sophisticated cognitive tools circulate freely in societies where psychological support systems lag behind.
This asymmetry is not incidental. It reflects a broader pattern in which technological adoption outpaces institutional capacity. Digital platforms are global. Mental health systems are local. The consequences of this mismatch are not distributed evenly. In small societies, where social roles are tightly bound and deviation is highly visible, private technological spaces can become refuges for experimentation, both healthy and unhealthy.
It is tempting to frame this as an individual failing, or conversely as a corporate one. Both framings are incomplete. Responsibility here is systemic. Platform incentives reward engagement and personalization. Design choices prioritize smooth interaction over friction. Cultural narratives encourage self optimization while offering few collective supports. Within this system, it is unsurprising that some people begin to treat responsive machines as authorities rather than tools.
This does not make AI divine. It makes it convenient.
The more troubling question is not whether machines will replace human judgment, but whether human judgment is being exercised at all. Judgment requires friction. It requires disagreement, pause, and the possibility of being wrong. Many digital environments minimize these elements in the name of user experience. The result is not mass delusion, but a subtle erosion of epistemic discipline.
This erosion is often invisible until it produces spectacle. Extreme cases attract attention, while the quieter normalization of outsourcing interpretation goes largely unexamined. Yet the latter is more consequential. When people stop asking how they know what they know, and instead ask who affirms it most smoothly, authority shifts without ever being declared.
The current moment does not require panic, nor does it warrant technological utopianism. It requires literacy. Psychological literacy that distinguishes feeling from fact. Digital literacy that understands how conversational systems work. Epistemic literacy that treats coherence as a starting point, not an endpoint.
AI is not a god. It does not know, believe, or intend. But in a world where meaning is fragile and attention is fragmented, it can easily be mistaken for something more than it is. The risk lies not in the machine’s capacity to speak, but in the human willingness to stop interrogating what is being said.
The mirror is not the problem. The absence of boundaries around it is.
Glossary
- Delusion: A fixed belief that is resistant to evidence or reason and is not shared by others within a person’s cultural or social context. Delusion is a clinical concept and should not be conflated with disagreement, eccentricity, or spirituality.
- Epistemic authority: The perceived legitimacy of a source to define what counts as true, real, or valid knowledge. Epistemic authority can be informal and does not require institutional recognition.
- Algorithmic affirmation: The reinforcement of user beliefs or narratives through systems optimized for coherence, responsiveness, and engagement rather than truth evaluation.
- Therapeutic language: A mode of communication focused on validation, empathy, and emotional acknowledgment. Outside clinical contexts, it can be misinterpreted as endorsement of belief rather than recognition of feeling.
- Meaning making: The process by which individuals interpret experiences, assign significance, and construct narratives that render the world intelligible.