Xiang Lisa Li

On the Learnability of Watermarks for Language Models

Dec 07, 2023
Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto

Benchmarking and Improving Generator-Validator Consistency of Language Models

Oct 03, 2023
Xiang Lisa Li, Vaishnavi Shrivastava, Siyan Li, Tatsunori Hashimoto, Percy Liang

Learning to Compress Prompts with Gist Tokens

Apr 17, 2023
Jesse Mu, Xiang Lisa Li, Noah Goodman

Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP

Dec 28, 2022
Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia

Evaluating Human-Language Model Interaction

Dec 20, 2022
Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang

Contrastive Decoding: Open-ended Text Generation as Optimization

Oct 27, 2022
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis

Diffusion-LM Improves Controllable Text Generation

May 27, 2022
Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, Tatsunori B. Hashimoto

On the Opportunities and Risks of Foundation Models

Aug 18, 2021
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Jan 01, 2021
Xiang Lisa Li, Percy Liang
