Percy Liang
LinkBERT: Pretraining Language Models with Document Links
Mar 29, 2022
Michihiro Yasunaga, Jure Leskovec, Percy Liang

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Feb 21, 2022
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang

CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities
Jan 25, 2022
Mina Lee, Percy Liang, Qian Yang

GreaseLM: Graph REASoning Enhanced Language Models for Question Answering
Jan 21, 2022
Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec

Extending the WILDS Benchmark for Unsupervised Adaptation
Dec 09, 2021
Shiori Sagawa, Pang Wei Koh, Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, Percy Liang

An Explanation of In-context Learning as Implicit Bayesian Inference
Nov 14, 2021
Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma

LILA: Language-Informed Latent Actions
Nov 05, 2021
Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh

Large Language Models Can Be Strong Differentially Private Learners
Oct 12, 2021
Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto