Ananya Kumar

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
Jul 18, 2022
Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan

Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
Apr 06, 2022
Jeff Z. HaoChen, Colin Wei, Ananya Kumar, Tengyu Ma

Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
Apr 01, 2022
Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. HaoChen, Tengyu Ma, Percy Liang

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Feb 21, 2022
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang

Extending the WILDS Benchmark for Unsupervised Adaptation
Dec 09, 2021
Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
Sep 12, 2021
Fahim Tajwar, Ananya Kumar, Sang Michael Xie, Percy Liang

On the Opportunities and Risks of Foundation Models
Aug 18, 2021
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Dec 16, 2020
Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, Percy Liang