Yuexiang Zhai

Is Offline Decision Making Possible with Only Few Samples? Reliable Decisions in Data-Starved Bandits via Trust Region Enhancement

Feb 24, 2024
Ruiqi Zhang, Yuexiang Zhai, Andrea Zanette

Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs

Jan 11, 2024
Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie

LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models

Nov 30, 2023
Marwa Abdulhai, Isadora White, Charlie Snell, Charles Sun, Joey Hong, Yuexiang Zhai, Kelvin Xu, Sergey Levine

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?

Nov 24, 2023
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma

RLIF: Interactive Imitation Learning as Reinforcement Learning

Nov 21, 2023
Jianlan Luo, Perry Dong, Yuexiang Zhai, Yi Ma, Sergey Levine

Investigating the Catastrophic Forgetting in Multimodal Large Language Models

Sep 26, 2023
Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma

Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning

Mar 09, 2023
Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine

Closed-Loop Transcription via Convolutional Sparse Coding

Feb 18, 2023
Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel M. Ni, Yi Ma

Understanding the Complexity Gains of Single-Task RL with a Curriculum

Dec 24, 2022
Qiyang Li, Yuexiang Zhai, Yi Ma, Sergey Levine
