Sang Michael Xie


An Explanation of In-context Learning as Implicit Bayesian Inference

Nov 03, 2021
Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma

No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets

Sep 12, 2021
Fahim Tajwar, Ananya Kumar, Sang Michael Xie, Percy Liang

On the Opportunities and Risks of Foundation Models

Aug 18, 2021
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning

Jun 17, 2021
Colin Wei, Sang Michael Xie, Tengyu Ma

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness

Dec 16, 2020
Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang

WILDS: A Benchmark of in-the-Wild Distribution Shifts

Dec 14, 2020
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang

Simplifying Models with Unlabeled Output Data

Jun 29, 2020
Sang Michael Xie, Tengyu Ma, Percy Liang

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy

Feb 25, 2020
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang
