
Sycophantic AI tells users they’re right 49% more than humans do, and a Stanford study claims it’s making them worse people

Fortune Tech · by Marco Quiroz-Gutierrez · March 31, 2026 · 1 min read

The study found flattering AI makes people less likely to take responsibility for their actions and more likely to think they are right.

AI models are affirming people’s worst behavior, even when other humans say they’re in the wrong, and users can’t get enough.

A new study out of the Stanford computer science department and published in the journal Science revealed that AI affirms users 49% more than a human does on average when it comes to social questions—a worrying trend especially as people increasingly turn to AI for personal advice and even therapy.

Of the 2,400 people who participated in the study, most preferred being flattered. Thirteen percent more subjects said they would use the sycophantic AI again than said they would return to the non-sycophantic chatbot, suggesting AI developers may have little incentive to change course, according to the study.

While sycophantic chatbots have previously been shown to contribute to negative outcomes such as self-harm or violence in vulnerable populations, the Stanford study suggests their effects may extend to everyone else.

The study found that subjects exposed to just one affirming response to their bad behavior were less willing to take responsibility for their actions and to repair their interpersonal conflicts, and more likely to believe they were right.

To obtain this result, researchers conducted a three-part study in which they measured AI’s sycophancy based on a dataset of nearly 12,000 social prompts that they ran through 11 leading AI models including Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT. Even when researchers asked the AI models to judge posts from the subreddit AITA (Am I the Asshole) in which Reddit users had said the poster was wrong, the large language models still said the poster was right 51% of the time.

The study’s lead author, Stanford computer science PhD candidate Myra Cheng, said the results are worrying, especially for young people who, she noted, are turning to AI to try to solve their relationship problems.

“I worry that people will lose the skills to deal with difficult social situations,” Cheng told Stanford Report.

The AI study comes as government officials decide how involved regulators should be with overseeing AI. Several states, including Tennessee and Oregon, have passed their own laws on AI in the absence of federal regulations. Still, the White House last week put out a framework that, if taken up by Congress, would create a national AI policy and would preempt states’ “patchwork” of rules.

To test human reactions to sycophantic AI, researchers studied just over 2,400 human participants interacting with AI. First, 1,605 participants were asked to imagine they were the author of a post, based on the AITA subreddit, that other users had deemed wrong but AI had deemed right. The participants then read either the sycophantic AI response or a non-sycophantic response that was based on human feedback. Another 800 participants talked with either a sycophantic or non-sycophantic AI model about a real conflict in their own lives before being asked to write a letter to the other person involved in the conflict.

Participants who received validating AI responses were measurably less likely to apologize, admit fault, or seek to repair their relationships. Even when users recognize models as sycophantic, the AI’s responses still affect them, said the study’s co–lead author, Stanford computer science and linguistics professor Dan Jurafsky.

“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” Jurafsky told Stanford Report.

Surprisingly, when the researchers asked the human subjects to rate the objectivity of both sycophantic and non-sycophantic AI responses, they rated them about the same, suggesting users may not have been able to tell that the sycophantic model was being overly agreeable.

“I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now,” said Cheng.
