
Ananya Kumar

Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?

Nov 25, 2022

How to Fine-Tune Vision Models with SGD

Nov 17, 2022

Holistic Evaluation of Language Models

Nov 16, 2022

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

Oct 20, 2022

Are Sample-Efficient NLP Models More Robust?

Oct 12, 2022

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift

Jul 18, 2022

Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations

Apr 06, 2022

Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation

Apr 01, 2022

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution

Feb 21, 2022

Extending the WILDS Benchmark for Unsupervised Adaptation

Dec 09, 2021