An End-to-End Model for Logits-Based Large Language Models Watermarking
Abstract: The rise of LLMs has increased concerns over source tracing and copyright protection for AIGC, highlighting the need for advanced detection technologies. Passive detection methods usually suffer high false-positive rates, while active watermarking techniques that manipulate logits or sampling offer more effective protection. Existing LLM watermarking methods, though effective on unaltered content, suffer significant performance drops when the text is modified and can introduce biases that degrade LLM performance on downstream tasks. These methods fail to achieve an optimal tradeoff between text quality and robustness, particularly because the encoder and decoder are not optimized end to end. In this paper, we introduce a novel end-to-end logits-perturbation method for watermarking LLM-generated text. By jointly optimizing the encoder and decoder, our approach achieves a better balance between quality and robustness. To address non-differentiable operations in the end-to-end training pipeline, we introduce an online prompting technique that leverages the on-the-fly LLM as a differentiable surrogate. Our method achieves superior robustness, outperforming distortion-free methods by 37-39% under paraphrasing and by 17.2% on average, while maintaining text quality on par with these distortion-free methods in terms of perplexity and downstream-task performance. Our method generalizes easily to different LLMs. Code is available at this https URL.
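The paper's learned, jointly optimized encoder and decoder are not reproduced here; as a hedged illustration of what "logits perturbation" watermarking means in general, the sketch below implements the classic green-list bias scheme of Kirchenbauer et al. (2023): a fixed bias delta is added to a pseudorandomly selected "green" subset of the vocabulary at each step, and detection runs a one-proportion z-test on the fraction of green tokens. All names and parameters (green_ids, gamma, delta, key) are illustrative assumptions, not the paper's method.

```python
import hashlib
import math
import random

def green_ids(prev_token_id: int, vocab_size: int, gamma: float, key: str = "secret") -> set:
    """Seed a PRNG from the previous token and a secret key, then mark a
    fixed fraction gamma of the vocabulary as 'green'."""
    seed = int.from_bytes(
        hashlib.sha256(f"{key}:{prev_token_id}".encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def perturb_logits(logits, prev_token_id, gamma=0.25, delta=2.0, key="secret"):
    """The 'encoder': add a bias delta to green-list logits before sampling."""
    greens = green_ids(prev_token_id, len(logits), gamma, key)
    return [l + delta if i in greens else l for i, l in enumerate(logits)]

def detect(token_ids, vocab_size, gamma=0.25, delta=2.0, key="secret") -> float:
    """The 'decoder': z-score of the observed green-token fraction against
    the null hypothesis (unwatermarked text hits green with probability gamma)."""
    n = len(token_ids) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_ids(prev, vocab_size, gamma, key)
    )
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

Under these assumptions, a z-score above roughly 4 flags watermarked text with a very low false-positive rate. The paper's contribution, per the abstract, is to replace this hand-designed bias with a perturbation network trained jointly with the detector, using online prompting as a differentiable surrogate for the non-differentiable sampling step; that learned pipeline is not shown here.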
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2505.02344 [cs.CR]
(or arXiv:2505.02344v3 [cs.CR] for this version)
https://doi.org/10.48550/arXiv.2505.02344
Submission history
From: Ka Him Wong
[v1] Mon, 5 May 2025 03:50:28 UTC (4,814 KB)
[v2] Thu, 22 May 2025 06:06:24 UTC (4,907 KB)
[v3] Thu, 2 Apr 2026 00:05:59 UTC (2,616 KB)