Katherine Lee

Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy

Oct 31, 2022
Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini

Measuring Forgetting of Memorized Training Examples

Jun 30, 2022
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang

PaLM: Scaling Language Modeling with Pathways

Apr 19, 2022
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel

Quantifying Memorization Across Neural Language Models

Feb 24, 2022
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang

What Does it Mean for a Language Model to Preserve Privacy?

Feb 14, 2022
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr

Counterfactual Memorization in Neural Language Models

Dec 24, 2021
Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini

Deduplicating Training Data Makes Language Models Better

Jul 14, 2021
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini

Extracting Training Data from Large Language Models

Dec 14, 2020
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel
