
Tengyu Ma

Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation

Sep 07, 2023
Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu

Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization

Jul 23, 2023
Kaiyue Wen, Zhiyuan Li, Tengyu Ma

One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention

Jul 07, 2023
Arvind Mahankali, Tatsunori B. Hashimoto, Tengyu Ma

Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time

Jun 28, 2023
Arvind Mahankali, Jeff Z. Haochen, Kefan Dong, Margalit Glasgow, Tengyu Ma

The Inductive Bias of Flatness Regularization for Deep Matrix Factorization

Jun 22, 2023
Khashayar Gatmiry, Zhiyuan Li, Ching-Yao Chuang, Sashank Reddi, Tengyu Ma, Stefanie Jegelka

Large Language Models as Tool Makers

May 26, 2023
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

May 24, 2023
Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu

Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training

May 23, 2023
Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma

Symbol tuning improves in-context learning in language models

May 15, 2023
Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le

Toward $L_\infty$-recovery of Nonlinear Functions: A Polynomial Sample Complexity Bound for Gaussian Random Fields

Apr 29, 2023
Kefan Dong, Tengyu Ma
