Aditi Raghunathan

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Feb 21, 2022
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang

An Explanation of In-context Learning as Implicit Bayesian Inference
Nov 14, 2021
Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma

On the Opportunities and Risks of Foundation Models
Aug 18, 2021
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

Just Train Twice: Improving Group Robustness without Training Group Information
Jul 19, 2021
Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn

Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
Jul 09, 2021
John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming
Nov 03, 2020
Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning
Aug 06, 2020
Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn