
AI probably does lead to more computer security disasters

AlgorithmWatch · by Dr. Nicolas Kayser-Bril · March 20, 2026 · 1 min read

Anecdotes abound of people losing data after trusting a chatbot to look after their computer. But does that constitute a trend? And is AI to blame, or are those who blindly trust chatbots simply the sort of people who would have done something foolish anyway? More research is needed, but there is a strong case to be made that AI is, at least partly, making matters worse.

Schadenfreude. Last month, the director of AI security at Meta set up an AI bot to sort through her professional emails. The software proved overzealous, deleting many messages without prior warning (she later said she had to rush home to unplug it). In December last year, a web service at Amazon went down for 13 hours after automated software rebooted it without considering the consequences.

More examples? Consider the entrepreneur who vibe-coded a tool to sort his tax returns and inadvertently exposed his tax credentials to the world. Or the startup founder who lost an entire database when his automated code-writing tool wiped it.

Persuasion. However long this list becomes, it could simply be a case of cherry-picking disasters from a sea of instances where LLMs were genuinely helpful. Several studies have shown that the code generated by AI tools is not particularly secure, often because programmers fail to specify safety measures in their prompts. But a study (from 2023) comparing LLM-generated and human-written code found no significant difference in the number of critical mistakes.

However, there is compelling evidence that LLMs can change people’s beliefs. As such, they probably amplify the well-known automation bias (the tendency to place undue trust in the output of computers). In other words, whereas lay people with a computer problem might have harbored some doubt when copy-pasting solutions they found online a few years ago, they are likely far more confident when doing so with content generated by an LLM. This remains a largely theoretical argument: I have yet to find a study examining the impact of LLMs on the digital hygiene of non-programmers.

Voices from the street. Would data recovery specialists (the people one turns to when all else fails) know more? I asked computer repair shops how often they encounter clients who had followed incorrect instructions from a chatbot. One in Bonn said he had not yet seen such a case. Another, in Schweinfurt, said the same.

However, Alexey Lavrov in Munich was familiar with the problem. He has seen several cases in which chatbots encouraged users to recover a damaged hard drive by running various tests and rebooting several times, steps that, in fact, can exacerbate the damage. Lavrov told me he sometimes asks clients to share their chatbot conversations before setting to work on the device.

People pleasers. Lavrov sums up the problem: “Chatbots comply with the user’s wish to solve the problem on their own, even when this is impossible and may make matters worse.” Chatbots, in fact, are not built to help, but to please. If you feel flattered when your LLM tells you how smart your question was (I certainly do), you are not alone: a pre-print from 2025 found that all major LLMs were highly sycophantic.

The LLMs’ sycophancy, combined with their patchy security record and the human tendency to overtrust them, probably makes many casual computer users overconfident, nudging them towards bad security decisions. That the chatbots are sometimes right does little to mitigate the risk. When users solve a problem under these conditions, they do not learn. They simply become more dependent on a system that is in itself prone to error. In effect, they may be merely postponing the accident waiting to happen. As usual, more research is needed. But if Meta’s head of AI security cannot safeguard her own data, what hope is there for the rest of us?

This is an excerpt from the Automated Society newsletter, a bi-weekly round-up of news on automated decision-making in Europe. Subscribe here.
