
Don’t blame AI for the Iran school bombing | Letters

The Guardian | by Guardian staff reporter | April 1, 2026

Anthony Lawton and Dr Felicity Mellor on the importance of humans who design systems and execute decisions taking responsibility for them

Your article on the Iran school bombing rightly challenges the reflex to blame artificial intelligence (AI got the blame for the Iran school bombing. The truth is far more worrying, 26 March). However, the deeper problem lies not in the technology but in the language now forming around it. To say that there was an “AI error” quietly removes the human subject from the sentence. Where once civilians were “dehoused” or “collateral damage”, responsibility is now displaced altogether: from people to systems.

This matters because moral accountability depends on clarity about who acts. However complex the chain of analysis and command, it remains human beings who design, authorise and execute these decisions. To obscure that fact is not a technical error but a civic one.

AI may accelerate warfare, but it is also accelerating a subtler shift: from euphemism to automation as alibi. If public language cannot name human responsibility, public scrutiny cannot hold it to account.
Anthony Lawton
Market Harborough, Leicestershire

Your article about losing control over AI agents (Number of AI chatbots ignoring human instruction increasing, study says, 27 March) was as alarming for its language as for its content. You say that AI agents “connived”, “conned”, “admitted” and “confessed”; that they “lie” and “cheat”. The term widely used to describe AI rule-breaking – scheming – is similarly anthropomorphic. Such language ascribes moral agency to large language models and in so doing obscures where responsibility actually lies.

Imagine a company had released high-speed vehicles on to the roads before fitting them with effective brakes. We would not say the vehicles “connived” to kill other road users; we would say the humans behind the company had behaved with the utmost recklessness. If out-of-control AI does ever cause harm, we will have no hope of holding the technology companies (and the governments that promote them) to account unless we properly attribute moral agency when we speak about their products.
Dr Felicity Mellor
Director, Science Communication Unit, Imperial College London
