Yecheng Jason Ma

DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

Mar 19, 2024
Alexander Khazatsky, Karl Pertsch, Suraj Nair, Ashwin Balakrishna, Sudeep Dasari, Siddharth Karamcheti, Soroush Nasiriany, Mohan Kumar Srirama, Lawrence Yunliang Chen, Kirsty Ellis, Peter David Fagan, Joey Hejna, Masha Itkina, Marion Lepert, Yecheng Jason Ma, Patrick Tree Miller, Jimmy Wu, Suneel Belkhale, Shivin Dass, Huy Ha, Arhan Jain, Abraham Lee, Youngwoon Lee, Marius Memmel, Sungjae Park, Ilija Radosavovic, Kaiyuan Wang, Albert Zhan, Kevin Black, Cheng Chi, Kyle Beltran Hatch, Shan Lin, Jingpei Lu, Jean Mercat, Abdul Rehman, Pannag R Sanketi, Archit Sharma, Cody Simpson, Quan Vuong, Homer Rich Walke, Blake Wulfe, Ted Xiao, Jonathan Heewon Yang, Arefeh Yavary, Tony Z. Zhao, Christopher Agia, Rohan Baijal, Mateo Guaman Castro, Daphne Chen, Qiuyu Chen, Trinity Chung, Jaimyn Drake, Ethan Paul Foster, Jensen Gao, David Antonio Herrera, Minho Heo, Kyle Hsu, Jiaheng Hu, Donovon Jackson, Charlotte Le, Yunshuang Li, Kevin Lin, Roy Lin, Zehan Ma, Abhiram Maddukuri, Suvir Mirchandani, Daniel Morton, Tony Nguyen, Abigail O'Neill, Rosario Scalise, Derick Seale, Victor Son, Stephen Tian, Emi Tran, Andrew E. Wang, Yilin Wu, Annie Xie, Jingyun Yang, Patrick Yin, Yunchu Zhang, Osbert Bastani, Glen Berseth, Jeannette Bohg, Ken Goldberg, Abhinav Gupta, Abhishek Gupta, Dinesh Jayaraman, Joseph J Lim, Jitendra Malik, Roberto Martín-Martín, Subramanian Ramamoorthy, Dorsa Sadigh, Shuran Song, Jiajun Wu, Michael C. Yip, Yuke Zhu, Thomas Kollar, Sergey Levine, Chelsea Finn

Eureka: Human-Level Reward Design via Coding Large Language Models

Oct 19, 2023
Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, Dinesh Jayaraman, Yuke Zhu, Linxi Fan, Anima Anandkumar

Universal Visual Decomposer: Long-Horizon Manipulation Made Easy

Oct 12, 2023
Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Jason Ma, Luca Weihs

LIV: Language-Image Representations and Rewards for Robotic Control

Jun 01, 2023
Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman

TOM: Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching

May 22, 2023
Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman

Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

Mar 31, 2023
Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier

VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

Sep 30, 2022
Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang

How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression

Jun 07, 2022
Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, Osbert Bastani

SMODICE: Versatile Offline Imitation Learning via State Occupancy Matching

Feb 04, 2022
Yecheng Jason Ma, Andrew Shen, Dinesh Jayaraman, Osbert Bastani

Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning

Dec 14, 2021
Yecheng Jason Ma, Andrew Shen, Osbert Bastani, Dinesh Jayaraman
