
Percy Liang

Holistic Evaluation of Language Models

Nov 16, 2022

Contrastive Decoding: Open-ended Text Generation as Optimization

Oct 27, 2022

Truncation Sampling as Language Model Desmoothing

Oct 27, 2022

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

Oct 20, 2022

Deep Bidirectional Language-Knowledge Graph Pretraining

Oct 19, 2022

Are Sample-Efficient NLP Models More Robust?

Oct 12, 2022

Improving Self-Supervised Learning by Characterizing Idealized Representations

Sep 13, 2022

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

Aug 01, 2022

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift

Jul 18, 2022

Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning

Jul 15, 2022