Undetectable Watermarks for Language Models

September 29, 2023
11:30am to 12:30pm
JCC 265
Speaker: Miranda Christ
Host: Megumi Ando

Abstract

Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior work has suggested methods of embedding watermarks in model outputs by noticeably altering the output distribution. We ask: Is it possible to introduce a watermark without incurring any detectable change to the output distribution?

To this end, we introduce a cryptographically inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
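As an informal illustration (not part of the talk abstract), undetectability in this cryptographic sense can be phrased as a standard indistinguishability requirement: writing Model for the original model and Wat_sk for the watermarked model keyed by a secret key sk, one asks that for every probabilistic polynomial-time distinguisher D that may adaptively query its oracle with arbitrarily chosen prompts,

    \left| \Pr\!\left[ D^{\mathsf{Model}}(1^{\lambda}) = 1 \right] - \Pr\!\left[ D^{\mathsf{Wat}_{sk}}(1^{\lambda}) = 1 \right] \right| \le \mathrm{negl}(\lambda),

where \lambda is the security parameter and the probability is taken over the choice of sk and the coins of D and the oracles. The notation Model, Wat_sk, and negl is illustrative and not taken from the announcement; the talk's precise definitions and construction may differ.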

This is joint work with Sam Gunn and Or Zamir.

Bio:

Miranda Christ is a Computer Science Ph.D. student in the theory group at Columbia University, where she is fortunate to be co-advised by Tal Malkin and Mihalis Yannakakis. Her research interests include cryptography, privacy, and complexity theory.