Hao-Tien Lewis Chiang

Google

Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

Oct 17, 2023

Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

Jun 29, 2023

Language to Rewards for Robotic Skill Synthesis

Jun 16, 2023

Scene Transformer: A unified multi-task model for behavior prediction and planning

Jun 15, 2021

RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies

Jul 12, 2019

Long-Range Indoor Navigation with PRM-RL

Feb 25, 2019

Learning Navigation Behaviors End-to-End with AutoRL

Feb 01, 2019

PEARL: PrEference Appraisal Reinforcement Learning for Motion Planning

Nov 30, 2018

Deep Neural Networks for Swept Volume Prediction Between Configurations

May 29, 2018