Tengyu Ma

Symbol tuning improves in-context learning in language models

May 15, 2023
Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le

Toward $L_\infty$-recovery of Nonlinear Functions: A Polynomial Sample Complexity Bound for Gaussian Random Fields

Apr 29, 2023
Kefan Dong, Tengyu Ma

Larger language models do in-context learning differently

Mar 08, 2023
Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, Tengyu Ma

Data Selection for Language Models via Importance Resampling

Feb 06, 2023
Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy Liang

First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains

Dec 01, 2022
Kefan Dong, Tengyu Ma

What learning algorithm is in-context learning? Investigations with linear models

Nov 29, 2022
Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou

A Theoretical Study of Inductive Biases in Contrastive Learning

Nov 27, 2022
Jeff Z. HaoChen, Tengyu Ma

How Does Sharpness-Aware Minimization Minimize Sharpness?

Nov 10, 2022
Kaiyue Wen, Tengyu Ma, Zhiyuan Li

Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models

Oct 25, 2022
Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift

Jul 18, 2022
Ananya Kumar, Tengyu Ma, Percy Liang, Aditi Raghunathan
