Ran Tian

A General Calibrated Regret Metric for Detecting and Mitigating Human-Robot Interaction Failures

Mar 07, 2024
Kensuke Nakamura, Ran Tian, Andrea Bajcsy


Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Gregory Kahn, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jan Peters, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. 
Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui


What Matters to You? Towards Visual Representation Alignment for Robot Learning

Oct 11, 2023
Ran Tian, Chenfeng Xu, Masayoshi Tomizuka, Jitendra Malik, Andrea Bajcsy


Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization

Oct 11, 2023
Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan


Towards Modeling and Influencing the Dynamics of Human Learning

Jan 02, 2023
Ran Tian, Masayoshi Tomizuka, Anca Dragan, Andrea Bajcsy


Simple Recurrence Improves Masked Language Models

May 23, 2022
Tao Lei, Ran Tian, Jasmijn Bastings, Ankur P. Parikh


Safety Assurances for Human-Robot Interaction via Confidence-aware Game-theoretic Human Models

Sep 29, 2021
Ran Tian, Liting Sun, Andrea Bajcsy, Masayoshi Tomizuka, Anca D. Dragan


Anytime Game-Theoretic Planning with Active Reasoning About Humans' Latent States for Human-Centered Robots

Sep 26, 2021
Ran Tian, Liting Sun, Masayoshi Tomizuka, David Isele


Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning

Aug 30, 2021
Ran Tian, Joshua Maynez, Ankur P. Parikh
