Refined Detection for Gumbel Watermarking
arXiv:2603.30017v1 Announce Type: cross
Abstract: We propose a simple detection mechanism for the Gumbel watermarking scheme proposed by Aaronson (2022). The new mechanism is proven to be near-optimal in a problem-dependent sense among all model-agnostic watermarking schemes, under the assumption that the next-token distributions are sampled i.i.d.
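For context on the scheme the abstract refers to: in Aaronson's (2022) Gumbel watermark, the sampler picks the token maximizing r_i^(1/p_i), where the r_i are pseudorandom uniforms derived from a secret key, and the classical detector sums -log(1 - r_t) over the emitted tokens. The sketch below illustrates that standard pipeline, not the refined detector this paper proposes (whose details are not given in the abstract); the function names and the explicit r arrays are illustrative assumptions.

```python
import numpy as np

def gumbel_sample(probs, r):
    # Aaronson's exponential-minimum / Gumbel trick: emit the token
    # argmax_i r_i^(1/p_i), where r_i ~ Uniform(0,1) are pseudorandom
    # values derived from a secret key and the preceding context.
    return int(np.argmax(r ** (1.0 / np.asarray(probs))))

def detection_stat(r_chosen):
    # Classical detection statistic: sum of -log(1 - r_t) over the
    # r-values of the emitted tokens. Under the null (no watermark)
    # each term is Exp(1), so the sum has mean n; watermarked text
    # biases each r_t toward 1 and inflates the statistic.
    return float(np.sum(-np.log(1.0 - np.asarray(r_chosen))))

# Tiny illustration on a two-token vocabulary.
probs = np.array([0.1, 0.9])
r = np.array([0.5, 0.5])
token = gumbel_sample(probs, r)   # the higher-probability token wins here
score = detection_stat([r[token], 0.5])
```

Under the watermark, the chosen token's r-value is stochastically larger than uniform, so the statistic exceeds its null mean of n (the text length); the paper's contribution is a detector that sharpens this test in a problem-dependent, near-optimal way.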
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR); Machine Learning (stat.ML)
Cite as: arXiv:2603.30017 [cs.LG]
(or arXiv:2603.30017v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.30017
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Tor Lattimore [v1] Tue, 31 Mar 2026 17:16:27 UTC (128 KB)