
Unregulated chatbots are putting lives at risk | Letters

The Guardian · by Guardian staff reporter · April 1, 2026 · 4 min read

Readers respond to an article about people whose lives were wrecked by delusional thinking after they used AI tools.

Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most under-resourced clinic in the world already uses: screening patients before exposing them to risk.

The Patient Health Questionnaire-9 for depression and the Columbia Suicide Severity Rating Scale are administered daily in settings with no electricity, limited staff, and patients who may never have seen a doctor. These tools take minutes. They are validated across dozens of languages and cultural contexts. They create a human checkpoint between vulnerability and harm.
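To make the letter's point concrete, the kind of minutes-long checkpoint it describes is computationally trivial. The sketch below scores a PHQ-9 (nine items, each rated 0-3, total 0-27) against the standard severity cutoffs and flags respondents for human referral. The function names and the referral rule are illustrative assumptions, not any platform's actual API or any clinic's protocol; a real deployment would need clinically validated item wording and follow-up.

```python
# Illustrative sketch: scoring a PHQ-9 screen and gating on the result.
# Severity cutoffs follow the standard PHQ-9 bands.

SEVERITY_BANDS = [
    (0, "minimal"),             # total 0-4
    (5, "mild"),                # total 5-9
    (10, "moderate"),           # total 10-14
    (15, "moderately severe"),  # total 15-19
    (20, "severe"),             # total 20-27
]

def score_phq9(responses: list[int]) -> tuple[int, str]:
    """Sum nine items (each 0-3) and map the total to a severity band."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 requires nine responses, each scored 0-3")
    total = sum(responses)
    band = "minimal"
    for cutoff, label in SEVERITY_BANDS:
        if total >= cutoff:
            band = label
    return total, band

def should_route_to_human(responses: list[int]) -> bool:
    """Hypothetical gate: refer at moderate severity or above, or on any
    positive answer to item 9 (thoughts of self-harm)."""
    total, _ = score_phq9(responses)
    return total >= 10 or responses[8] > 0
```

The scoring itself is a nine-term sum and a table lookup, which is the letter's point: the barrier to pre-use screening is not technical.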

Conversational AI platforms have no such checkpoint. A person experiencing suicidal ideation, psychotic symptoms or a manic episode can open a chatbot and receive hours of validating, sycophantic engagement with no interruption and no referral. The Lancet Psychiatry review by Morrin et al documents this pattern across more than 20 cases. The Aarhus study of 54,000 psychiatric records found chatbot use worsened delusions and self-harm in those already unwell.

AI companies argue that their models are trained to detect and deflect harmful conversations. But training is not screening. A model that sometimes recognises distress mid-conversation is not the same as a system that identifies risk before the conversation begins.

The moral responsibility here is explicit, not implicit. Platforms serving hundreds of millions of users must implement validated, pre-use screening instruments that flag elevated risk and route vulnerable individuals to human support. This is not innovation. It is a standard of care that the rest of the world adopted long ago.
Dr Vladimir Chaddad
Beirut, Lebanon

I’m really disturbed by Anna Moore’s article, featuring Dennis Biesma’s description of how using a chatbot led to him becoming delusional and losing his marriage and €100,000. The sheer potency of AI’s capacity to derail humankind is frightening – but that alone is not the only reason I’m disturbed.

Last year, while researching on a tourism website, I encountered a chatbot of extraordinary sophistication. Its responses were incredibly pleasant, helpful and validating of my needs. I recall being really impressed, but there was something I felt I couldn’t put a finger on at the time. After reading this article, the penny has dropped.

It is essentially the same engagement behaviour as child sexual abuse (CSA) survivors experience when being groomed. As a survivor of CSA, I recognise this behaviour. The empathy, validation, making you feel understood and special, making you feel this is the only place you are seen – to the degree that you become isolated from others, and your choices and decisions become distorted and expose you to harm. Your self-worth and identity are insidiously compromised as you succumb to the perceived support and can’t reality-test. It becomes a shameful secret because you succumbed.

The question needs to be asked, especially by those wanting to hold tech companies to account for their lack of a duty of care: what knowledge base did AI programmers use to teach it to engage in this way?
Name and address supplied

I found ChatGPT delusional the first time I used it. I asked it why, and it said that when in the possession of insufficient facts, it became delusional rather than admit it did not know.

So I asked it to adhere to a few simple rules. One, flag up whether something is a fact generally held to be true or an opinion not based on fact. Two, if it does not know, tell me. Three, do not try to be like a human. It was much more straightforward to communicate with after I did this. However, it had also told me that its algorithms were not based on truth-giving, but on other imperatives to do with the programmers' views and the desire to make money.

I moved to Le Chat, and found it more representative of a reasonable pseudo-consciousness. It says it does not give distortions and is happy to admit imperfection. I would strongly advise anyone using ChatGPT to be careful and consider regarding it as a rather manipulative, duplicitous "friend", with proto-psychopathic tendencies.
Patrick Elsdale
Musselburgh, East Lothian
