Hao-Tien Lewis Chiang

Google

Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

Oct 17, 2023
Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez

Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

Jun 29, 2023
Anthony Francis, Claudia Perez-D'Arpino, Chengshu Li, Fei Xia, Alexandre Alahi, Rachid Alami, Aniket Bera, Abhijat Biswas, Joydeep Biswas, Rohan Chandra, Hao-Tien Lewis Chiang, Michael Everett, Sehoon Ha, Justin Hart, Jonathan P. How, Haresh Karnan, Tsang-Wei Edward Lee, Luis J. Manso, Reuth Mirksy, Soeren Pirk, Phani Teja Singamaneni, Peter Stone, Ada V. Taylor, Peter Trautman, Nathan Tsoi, Marynel Vazquez, Xuesu Xiao, Peng Xu, Naoki Yokoyama, Alexander Toshev, Roberto Martin-Martin

Language to Rewards for Robotic Skill Synthesis

Jun 16, 2023
Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia

Scene Transformer: A unified multi-task model for behavior prediction and planning

Jun 15, 2021
Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens

RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies

Jul 12, 2019
Hao-Tien Lewis Chiang, Jasmine Hsu, Marek Fiser, Lydia Tapia, Aleksandra Faust

Long-Range Indoor Navigation with PRM-RL

Feb 25, 2019
Anthony Francis, Aleksandra Faust, Hao-Tien Lewis Chiang, Jasmine Hsu, J. Chase Kew, Marek Fiser, Tsang-Wei Edward Lee

Learning Navigation Behaviors End-to-End with AutoRL

Feb 01, 2019
Hao-Tien Lewis Chiang, Aleksandra Faust, Marek Fiser, Anthony Francis

PEARL: PrEference Appraisal Reinforcement Learning for Motion Planning

Nov 30, 2018
Aleksandra Faust, Hao-Tien Lewis Chiang, Lydia Tapia
