Paper 2023/763

Undetectable Watermarks for Language Models

Miranda Christ, Columbia University
Sam Gunn, University of California, Berkeley
Or Zamir, Princeton University
Abstract

Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested methods of embedding watermarks in model outputs, by $\textit{noticeably}$ altering the output distribution. We ask: Is it possible to introduce a watermark without incurring $\textit{any detectable}$ change to the output distribution? To this end we introduce a cryptographically-inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with the knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
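To make the notion concrete, below is a minimal toy sketch (in Python) of the general keyed-watermark idea the abstract describes: the generator replaces its sampling randomness with pseudorandom values derived from a secret key and the text produced so far, and the detector, holding the key, recomputes those values and runs a statistical test. Everything in the sketch, including the binary alphabet, the HMAC-based PRF stand-in, the scoring rule, and all names, is an illustrative assumption and not the construction from the paper.

import hmac, hashlib, math

def prf_value(key: bytes, context: bytes) -> float:
    """Pseudorandom value in (0, 1) derived from the secret key and the
    text generated so far (HMAC-SHA256 used here as a PRF stand-in)."""
    digest = hmac.new(key, context, hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 0.5) / 2**64

def generate(key: bytes, model_p1, length: int) -> list:
    """Sample a bit sequence: emit 1 iff the keyed pseudorandom value for
    the current prefix falls below the model's probability of a 1.  To
    anyone without the key, each bit follows the model's distribution."""
    out = []
    for _ in range(length):
        p1 = model_p1(out)                  # model's Pr[next bit = 1]
        u = prf_value(key, bytes(out))
        out.append(1 if u < p1 else 0)
    return out

def detect(key: bytes, bits: list) -> float:
    """Score how strongly the observed bits correlate with the keyed
    pseudorandom values.  Unwatermarked text scores about len(bits) in
    expectation; watermarked high-entropy text scores noticeably higher."""
    score = 0.0
    for i, b in enumerate(bits):
        u = prf_value(key, bytes(bits[:i]))
        score += -math.log(u) if b == 1 else -math.log(1.0 - u)
    return score

if __name__ == "__main__":
    key = b"secret watermarking key"
    # Hypothetical model: an unbiased coin at every step (maximal entropy).
    bits = generate(key, lambda prefix: 0.5, 256)
    print("score with the right key:", detect(key, bits))
    print("score with a wrong key:  ", detect(b"some other key", bits))

In this toy, the correct key yields a score well above the sequence length, while a wrong key yields a score near it, illustrating why detection requires the secret key; indistinguishability of the output itself rests on the quality of the underlying PRF, which the paper obtains from one-way functions.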

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint.
Keywords
Large Language Models, Machine Learning, Watermarks, Steganography
Contact author(s)
mchrist @ cs columbia edu
gunn @ berkeley edu
orzamir @ princeton edu
History
2023-05-30: approved
2023-05-26: received
Short URL
https://ia.cr/2023/763
License
Creative Commons Attribution-NonCommercial-NoDerivs
CC BY-NC-ND

BibTeX

@misc{cryptoeprint:2023/763,
      author = {Miranda Christ and Sam Gunn and Or Zamir},
      title = {Undetectable Watermarks for Language Models},
      howpublished = {Cryptology ePrint Archive, Paper 2023/763},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/763}},
      url = {https://eprint.iacr.org/2023/763}
}