
Authorship Impersonation via LLM Prompting does not Evade Authorship Verification Methods

arXiv cs.CL · Baoyi Zeng, Andrea Nini · April 1, 2026

Abstract: Authorship verification (AV), the task of determining whether a questioned text was written by a specific individual, is a critical part of forensic linguistics. While manual authorial impersonation by perpetrators has long been a recognized threat in historical forensic cases, recent advances in large language models (LLMs) raise new challenges, as adversaries may exploit these tools to impersonate another's writing. This study investigates whether prompted LLMs can generate convincing authorial impersonations and whether such outputs can evade existing forensic AV systems. Using GPT-4o as the adversary model, we generated impersonation texts under four prompting conditions across three genres: emails, text messages, and social media posts. We then evaluated these outputs against both non-neural AV methods (n-gram tracing, Ranking-Based Impostors Method, LambdaG) and neural approaches (AdHominem, LUAR, STAR) within a likelihood-ratio framework. Results show that LLM-generated texts failed to sufficiently replicate authorial individuality to bypass established AV systems. We also observed that some methods achieved even higher accuracy when rejecting impersonation texts compared to genuine negative samples. Overall, these findings indicate that, despite the accessibility of LLMs, current AV systems remain robust against entry-level impersonation attempts across multiple genres. Furthermore, we demonstrate that this counter-intuitive resilience stems, at least in part, from the higher lexical diversity and entropy inherent in LLM-generated texts.
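The abstract attributes the AV systems' resilience partly to the higher lexical diversity and entropy of LLM-generated text. As a minimal sketch of how such statistics can be measured (the `lexical_stats` helper and the toy example strings below are illustrative assumptions, not the paper's actual pipeline), one could compare type-token ratio and token-level Shannon entropy:

```python
import math
from collections import Counter

def lexical_stats(text: str) -> tuple[float, float]:
    """Return (type-token ratio, Shannon entropy in bits) over whitespace tokens."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    # Type-token ratio: unique tokens / total tokens (a simple lexical-diversity proxy)
    ttr = len(counts) / n
    # Shannon entropy of the empirical token distribution
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return ttr, entropy

# Toy contrast: repetitive "human-like" text vs. varied "LLM-like" text
human_like = "i told you i told you this would happen and you never listen"
llm_like = "regrettably, the anticipated outcome materialized despite prior warnings"
print(lexical_stats(human_like))  # repeated tokens lower both statistics
print(lexical_stats(llm_like))
```

Under this toy measure, the more lexically varied string scores a higher type-token ratio, matching the direction of the effect the authors report; a real study would use a proper tokenizer and length-normalized diversity measures, since raw TTR is sensitive to text length.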

Comments: 11 pages, 3 figures

Subjects: Computation and Language (cs.CL)

Cite as: arXiv:2603.29454 [cs.CL]

(or arXiv:2603.29454v1 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2603.29454

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Baoyi Zeng [v1] Tue, 31 Mar 2026 08:59:09 UTC (83 KB)
