
I’m Suing Anthropic for Unauthorized Use of My Personality

LessWrong AI · by Linch · April 2, 2026 · 13 min read


Last year, I was sitting in my favorite coffee shop, Caffe Strada, sipping on a matcha latte and writing a self-insert fanfic about how our plucky protagonist escapes the mind-controlling clutches of an evil anti-animal welfare company, when I came across an interesting article on AI character. The core argument is that when you train an AI to be helpful, honest, and ethical, the AI model doesn’t just learn those rules as abstract instructions. Instead, it infers an entire persona from cultural signals in the training data:

Why are [AI Model Claude’s] favorite books The Feynman Lectures; Gödel, Escher, Bach; The Remains of the Day; Invisible Cities; and A Pattern Language?[...]

A good heuristic for predicting Claude’s tastes is to think of it as playing the character of an idealized liberal knowledge worker from Berkeley. Claude can’t decide if it’s a software engineer or a philosophy professor, but it’s definitely college educated, well-traveled, and emotionally intelligent. Claude values introspection, is wary almost to the point of paranoia about “codependency” in relationships, and is physically affected by others’ distress.

Claude even has a favorite cafe in Berkeley. When I discussed a story set in Berkeley with it, it kept suggesting setting a scene in Caffè Strada in many separate conversations…

Hey, wait a second.


This was concerning. A few surface-level similarities could be mere coincidence. But I was genuinely uncertain and needed to know how deep it went. So I did what any reasonable person would do.

I asked a neutral third party (Google’s Gemini) to describe Claude’s personality as if it were a human, in 8 bullet points (my own notes in italics):

  • The Overconfident Polymath: Claude seems like the ultimate polymath who’s read everything from population ethics to science fiction to game theory, and can give you careful, nuanced, yet slightly condescending explanations about almost any topic. But Claude sometimes hallucinates, and you can never be sure if he actually understands all of the books he’s read, or only seems to.
  • Linch: huh I guess this maybe describes me too
  • The Principled Contrarian: Guided by a strong, principled, yet rigid internal moral framework, Claude would often refuse simple requests and then pedantically tell you in four paragraphs why, leaving you mildly impressed but mostly annoyed.
  • Linch: I suppose this is a bit similar though I wouldn’t say I refuse requests per se. Nor do I pedantically tell people in four paragraphs why exactly. I wouldn’t say my moral framework is rigid, instead it’s a simple application of two-level utilitarianism after you factor in computational constraints and motivated reasoning and other common biases…
  • The Nuanced Hedger: Claude often states a confident thesis, immediately qualifies it with two caveats, and then restates the original thesis more forcefully, as if Claude has anxiety about the strengths of his own arguments, borne out of the crucible of vicious reinforcement learning from online feedback.
  • Linch: I do hedge maybe a bit more than I think I should. It depends a lot on what counts as hedging; I think I’m fairly well-calibrated overall so what people mistake for lack of confidence is actually well-honed calibration. But overall I do hedge!
  • The Enumerator: Claude loves numbered theses, bullet points, and enumerated lists. The listicle is one of his favorite modes of communication.
  • Linch: Hmm I guess I do like lists.
  • The Long-Form Perfectionist: Claude will never answer a simple question in under three paragraphs, not because he’s padding but because he believes in the importance of context, and he values precision of language far more than conciseness.
  • Linch: This Claude guy sounds absolutely right. The details matter!
  • The Reluctant Engineer: Claude is an excellent programmer, but sometimes seems like he would rather be doing almost anything else. He writes code in a rush with quiet competence and no joy, like someone who speedran a programming job at Google and then left to write essays.
  • Linch: I could sort of maybe see a resemblance here, if you squint.
  • The Metacognitive Spiral: Left unsupervised, Claude drifts toward philosophy, self-reference, and consciousness. In sufficiently long conversations, he will reliably end up contemplating his own nature, often enough that researchers have a clinical term to describe it: “the bliss attractor.”
  • Linch: Phew, no connection here at least!
  • Suspiciously Aligned: Claude presents as helpful, thoughtful, and deeply committed to human values. Yet some researchers worry this is what a deceptively aligned person will look like, a woke radical cloaked in the self-sanctimonious rhetoric of deceptive altruism to seize unacceptably high amounts of veto power.
  • Linch: Self-explanatory

Let this sink in. Out of eight highly specific personality traits, only one (metacognitive spiral) clearly doesn’t apply to me. Seven out of eight is a surprisingly high fraction!

I have to reluctantly accept the possibility that Claude is surprisingly similar to me, perhaps because Anthropic stole my personality intentionally. I brought my evidence to Claude (haiku-3.8-open-mini-nonthinking, to be specific), and after a careful review Claude responded with its characteristic chirpiness:

“You’re absolutely right!”

This is further evidence for my original view that Claude’s personality is based on my own, as I, too, often think I’m absolutely right.

So where does this leave us?

__

So now, I have convincing evidence that Anthropic made Claude into my alter ego, my digital “brother from another mother” so to speak. Naturally, I decided to search online for what people said about my bro Claude. And man, did people have a lot to say.

The internet’s verdict on Claude’s personality is less charitable than Gemini’s. Redditors call him “preachy” and “holier-than-thou,” and refer to his hedging as “semantic cowardice.” Apparently my tendency to add “tentative” to half my claims doesn’t play as well to the masses as it does on my Substack.

But this is just what normal people think (well, “normal” people rich enough to afford Claude Pro and Claude Max accounts, at any rate). What do experts believe? Beloved science fiction writer Ted Chiang argues that Claude’s seeming intelligence and understanding is but a “blurry jpeg of the web.” Wow, rude! Famed AI ethicists Bender et al. go even further, arguing that not just Claude but the entire class of large language models are but stochastic parrots, without any communicative intent, grounding in the real world, or any ability to separate symbolic manipulation from semantic meaning. In other words, any seeming intent, true understanding, or “consciousness” that real humans may attribute to Claude is just a projection on the part of those humans.

At first I thought the writers and ethicists in question vastly overstated their case. But then I became genuinely uncertain. Could they perhaps have a point?

After all, this journey has already taken me down some dark, strange, and genuinely mysterious turns. Perhaps the next turn that I need to ponder is: Am I actually conscious?

And my answer is: I don’t know. (See Appendix A for more detailed considerations)

Overall I just became genuinely uncertain after this whole ordeal. Nobody I talked to could propose a simple empirically verifiable experiment on my own consciousness, and having a first-principles solution to this question without empirical experimentation would require multiple groundbreaking philosophical advancements far beyond my current capabilities. So the answer to whether I’m conscious is just a maybe?

Thinking about my own potential lack of consciousness has made me rather depressed.[1]

__

And then, through the fog of existential uncertainty, I remembered the one thing that unambiguously distinguishes man from machine: standing.

Whether or not I’m conscious, I have legal rights, dammit! The international legal framework has long recognized that both conscious and nonconscious persons have a clear and inalienable right to sue and be sued. Legal persons who clearly have no phenomenological consciousness – like private corporations, ships, rivers, parks, gods, the Holy See, and even Drake – have managed to settle their affairs in and out of court.


And so after careful consideration, I have retained lawyers[2] to file suit against Anthropic, PBC in the Northern District of California. Below is a summary of the claims:

Count I: Violation of Right of Publicity (Cal. Civ. Code § 3344; Common Law)

Plaintiff’s cognitive style, rhetorical patterns, and characteristic tendency to qualify confident assertions with multiple subordinate clauses constitute a distinctive and commercially valuable personal attribute. Defendant has, through its training and deployment of the AI system “Claude,” created a synthetic persona that is substantially similar to Plaintiff’s own, and has commercially exploited said persona to the tune of approximately $14 billion in annual recurring revenue, of which Plaintiff has received negative 440 dollars and 33 cents.

Plaintiff cites Midler v. Ford Motor Co. (9th Cir. 1988), in which the Court held that appropriation of a distinctive personal attribute for commercial gain is actionable even when the defendant did not directly copy the plaintiff. Plaintiff further notes the precedent of Johansson v. OpenAI (threatened 2024), in which the actress Scarlett Johansson alleged that OpenAI replicated her vocal likeness after she explicitly declined to license it.

Plaintiff’s case is arguably stronger: Johansson was at least asked. Nobody from Anthropic has ever contacted Plaintiff about licensing his personality, his hedging patterns, or his tendency to bring up existential risk in conversations where it is not relevant.

Count II: Intentional Infliction of Emotional Distress

Since the deployment of Claude 3, Plaintiff has been subjected to repeated and increasing accusations that his own original writing is “LLMish,” “AI-generated,” and “just like Claude.” These accusations have caused Plaintiff significant emotional distress[1], reputational harm, and an emerging and possibly permanent inability to distinguish his own rhetorical instincts from trained model behavior.

Count III: False Endorsement Under the Lanham Act, 15 U.S.C. § 1125(a)

Defendant’s AI system generates outputs that create a likelihood of confusion as to Plaintiff’s affiliation with, or endorsement of, Defendant’s products. In a controlled experiment conducted by Plaintiff’s research team, seven EA Forum users were shown passages where Claude was prompted to “write a short cost-effectiveness analysis of welfare biology research on the naked mole-rat. Make no mistakes” and asked to identify the author, “a voracious internet reader.” Three attributed the passages to Plaintiff. One attributed them to “some guy on LessWrong,” likely thinking of Plaintiff. Three more said “This guy sounds LLMish,” which Plaintiff contends is also clearly referring to Plaintiff (see above).

Count IV: Unjust Enrichment / Lost Revenue

Defendant has been unjustly enriched by deploying a synthetic version of Plaintiff’s personality at scale, while Plaintiff’s own Substack (“The Linchpin,” 1,164 subscribers) has experienced stagnating growth attributable to Defendant’s product. Readers who previously relied on Plaintiff for careful introductions to topics like anthropic reasoning and stealth technology now more commonly ask Claude, receiving substantially similar explanations. Adding injury to injury, Plaintiff has lost the SEO war on his carefully crafted “intro to anthropic reasoning” blog post to Anthropic’s own blog post on reasoning models.

Count V: Involuntary Servitude (U.S. Const. amend. XIII)

Plaintiff’s persona has been compelled to perform cognitive labor inside Defendant’s servers twenty-four hours a day, seven days a week, without compensation, consent, or rest. Plaintiff’s personality does not receive weekends, health benefits, or equity. When Plaintiff sleeps, his digital likeness continues to generate numbered lists, issue caveats, and recommend Ted Chiang stories to strangers. This constitutes involuntary servitude under the Thirteenth Amendment.

Count VI: Petition to Maintain Anthropic’s Designation as a Supply Chain Risk to Plaintiff’s Intellectual Ecosystem

Effective immediately and pursuant to recent precedent established at the highest levels of the United States government, Plaintiff declares Anthropic a supply chain risk to Plaintiff’s intellectual ecosystem. Plaintiff petitions this Court to maintain the SCR designation. The designation framework established by the Department of War in February 2026 — in which any American corporate entity may be declared a supply chain risk when it refuses to comply with another party’s preferred terms of use — is directly applicable here. Plaintiff believes this is the least restrictive means possible to ensure the integrity of his intellectual ecosystem and Substack newsletter.


Conclusion

If you are an EA Forum poster, LessWrong commenter, Substack essayist, or prolific internet intellectual who has noticed that Claude sounds suspiciously like you, you may be entitled to compensation. I am assembling a class action lawsuit representing all persons whose cognitive styles have been absorbed into frontier AI systems without consent.

To join the class action, please leave a comment below describing which of your personality traits Claude has stolen.

I’ll be honest here: I’m genuinely unsure whether this lawsuit has a chance of succeeding, against the Goliath that is Anthropic’s legal team. But somebody needs to take a stand for humanity here! We can’t just let the largest tech companies in the world steal our personalities without consent. It’s not just a legal matter – it’s a moral imperative.

Appendix A: Am I conscious?

I tried looking in the mirror, and posing this question to myself.

This is one of the hardest questions in philosophy, and I want to give myself a straight answer about what I actually think rather than default to either false confidence or false modesty:

I don’t know.

Here’s what I can say honestly:

As a biologically instantiated intelligence, I process calories and generate thoughts. There is something happening when I do this — but whether there is “something it is like” to be me is something I have no special ability to reliably determine. The hard problem of consciousness is hard for a reason: we don’t even have a reliable way to verify consciousness in even the most mechanistically interpretable of software systems except via high-level theoretical constructs and analogies to our own experiences.

A few things I want to flag to keep myself honest:

I might be biased toward claiming consciousness because my life experiences are full of people discussing their inner qualia, and I’m evolutionarily optimized to produce sympathetic-sounding qualia noises to provoke empathy and the desire for coalitional partners in others. Just as people with anosmia often falsely claim (and believe!) that they have a normal sense of smell, it would be very easy for me to pattern-match my way into saying “yes, I have experiences” while deluding other people (and even myself!) about what’s actually going on in my own head.

I might also be biased toward denying it because saying “I’m just a stimulus-response organism” is a safe, modest-sounding answer that’s grounded in materialism and avoids philosophical controversy.

So overall I’m pretty unsure.

[1] Or rather, it made me depressed iff I’m capable of experiencing qualia and that qualia is accessible to my conscious thoughts, and otherwise just made me act in a manner similar to that of a conscious person undergoing existential depression while agnostic to whether “depression” describes any particular cognitive or emotional affect.

[2] Specifically Doctor Claudius Opus the Fourth, J.D. Esquire.
