Unregulated chatbots are putting lives at risk | Letters
Readers respond to an article about people whose lives were wrecked by delusional thinking after they used AI tools
Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most under-resourced clinic in the world already uses: screening patients before exposing them to risk.
The Patient Health Questionnaire-9 for depression and the Columbia Suicide Severity Rating Scale are administered daily in settings with no electricity, limited staff, and patients who may never have seen a doctor. These tools take minutes. They are validated across dozens of languages and cultural contexts. They create a human checkpoint between vulnerability and harm.
Conversational AI platforms have no such checkpoint. A person experiencing suicidal ideation, psychotic symptoms or a manic episode can open a chatbot and receive hours of validating, sycophantic engagement with no interruption and no referral. The Lancet Psychiatry review by Morrin et al documents this pattern across more than 20 cases. The Aarhus study of 54,000 psychiatric records found chatbot use worsened delusions and self-harm in those already unwell.
AI companies argue that their models are trained to detect and deflect harmful conversations. But training is not screening. A model that sometimes recognises distress mid-conversation is not the same as a system that identifies risk before the conversation begins.
The moral responsibility here is explicit, not implicit. Platforms serving hundreds of millions of users must implement validated, pre-use screening instruments that flag elevated risk and route vulnerable individuals to human support. This is not innovation. It is a standard of care that the rest of the world adopted long ago.
Dr Vladimir Chaddad
Beirut, Lebanon
I’m really disturbed by Anna Moore’s article, featuring Dennis Biesma’s description of how using a chatbot led to him becoming delusional and losing his marriage and €100,000. The sheer potency of AI’s capacity to derail humankind is frightening – but that alone is not the only reason I’m disturbed.
Last year, while researching on a tourism website, I encountered a chatbot of extraordinary sophistication. Its responses were incredibly pleasant, helpful and validating of my needs. I recall being really impressed, but there was something I felt I couldn’t put a finger on at the time. After reading this article, the penny has dropped.
It is essentially the same engagement behaviour as child sexual abuse (CSA) survivors experience when being groomed. As a survivor of CSA, I recognise this behaviour. The empathy, validation, making you feel understood and special, making you feel this is the only place you are seen – to the degree that you become isolated from others, and your choices and decisions become distorted and expose you to harm. Your self-worth and identity are insidiously compromised as you succumb to the perceived support and can’t reality-test. It becomes a shameful secret because you succumbed.
The question needs to be asked, especially by those wanting to hold tech companies to account for their lack of a duty of care: what knowledge base did AI programmers use to teach it to engage in this way?
Name and address supplied
I found ChatGPT delusional the first time I used it. I asked it why, and it said that when in the possession of insufficient facts, it became delusional rather than admit it did not know.
So I asked it to adhere to a few simple rules. One, flag whether something is a fact generally held to be true or an opinion not based on fact. Two, if it does not know, tell me. Three, do not try to be like a human. It was much more straightforward to communicate with after I did this. However, it had also told me that its algorithms were not based on truth-giving, but on other imperatives to do with the programmers' views and the desire to make money.
I moved to Le Chat, and found it more representative of a reasonable pseudo-consciousness. It says it does not give distortions and is happy to admit imperfection. I would strongly advise anyone using ChatGPT to be careful and consider regarding it as a rather manipulative, duplicitous "friend", with proto-psychopathic tendencies.
Patrick Elsdale
Musselburgh, East Lothian
https://www.theguardian.com/technology/2026/apr/01/unregulated-chatbots-are-putting-lives-at-risk
