AI probably does lead to more computer security disasters
Anecdotes abound of people losing data after trusting a chatbot to look after their computer. But does that constitute a trend? And is AI to blame, or are those who blindly trust chatbots simply the sort of people who would have done something foolish anyway? More research is needed, but there is a strong case to be made that AI is, at least in part, making matters worse.
Schadenfreude. Last month, the director of AI security at Meta set up an AI bot to sort through her professional emails. The software proved overzealous, deleting many messages without warning (she later said she had to rush home to unplug it). In December last year, a web service at Amazon went down for 13 hours after automated software rebooted it without considering the consequences.
More examples? Consider the entrepreneur who vibe-coded a tool to sort his tax returns and inadvertently exposed his tax credentials to the world. Or the startup founder who lost an entire database when his automated code-writing tool wiped it.
Persuasion. However long this list becomes, it could simply be a case of cherry-picking disasters from a sea of instances in which LLMs were genuinely helpful. Several studies have shown that code generated by AI tools is not particularly secure, often because programmers fail to specify security requirements in their prompts. But a 2023 study comparing LLM-generated and human-written code found no significant difference in the number of critical mistakes.
However, there is compelling evidence that LLMs can change people’s beliefs. As such, they probably amplify the well-known automation bias (the tendency to place undue trust in the output of computers). In other words, whereas lay people with a computer problem might have harbored some doubt a few years ago when copy-pasting solutions they found online, they are likely far more confident when doing the same with content generated by an LLM. This remains a largely theoretical argument: I have yet to find a study examining the impact of LLMs on the digital hygiene of non-programmers.
Voices from the street. Would data recovery specialists (the people one turns to when all else fails) know more? I asked computer repair shops how often they encounter clients who had followed incorrect instructions from a chatbot. One in Bonn said he had not yet seen such a case. Another, in Schweinfurt, said the same.
However, Alexey Lavrov in Munich was familiar with the problem. He has seen several cases in which chatbots encouraged users to recover a damaged hard drive by running various tests and rebooting several times, steps that, in fact, can exacerbate the damage. Lavrov told me he sometimes asks clients to share their chatbot conversations before setting to work on the device.
People pleasers. Lavrov sums up the problem: “Chatbots comply with the user’s wish to solve the problem on their own, even when this is impossible and may make matters worse.” Chatbots, in fact, are built not to help, but to please. If you feel flattered when your LLM tells you how smart your question was (I certainly do), you are not alone: a 2025 preprint found that all major LLMs were highly sycophantic.
The LLMs’ sycophancy, combined with their patchy security record and the human tendency to overtrust them, probably makes many casual computer users overconfident, nudging them towards bad security decisions. That chatbots are sometimes right does little to mitigate the risk. When users solve a problem under these conditions, they do not learn; they simply become more dependent on a system that is itself prone to error. In effect, they may merely be postponing the accident waiting to happen. As usual, more research is needed. But if Meta’s head of AI security cannot safeguard her own data, what hope is there for the rest of us?
This is an excerpt from the Automated Society newsletter, a biweekly round-up of news on automated decision-making in Europe. Subscribe here.
AlgorithmWatch
https://algorithmwatch.org/en/ai-probably-does-lead-to-more-computer-security-disasters/
