
Elizabeth Croft

Monash University

Learning to Communicate Functional States with Nonverbal Expressions for Improved Human-Robot Collaboration

Apr 30, 2024

How Can Everyday Users Efficiently Teach Robots by Demonstrations?

Oct 19, 2023

Comparing Subjective Perceptions of Robot-to-Human Handover Trajectories

Nov 16, 2022

Autonomous social robot navigation in unknown urban environments using semantic segmentation

Aug 25, 2022

Design and Implementation of a Human-Robot Joint Action Framework using Augmented Reality and Eye Gaze

Aug 25, 2022

AR Point&Click: An Interface for Setting Robot Navigation Goals

Mar 29, 2022

On-The-Go Robot-to-Human Handovers with a Mobile Manipulator

Mar 16, 2022

Design and Evaluation of an Augmented Reality Head-Mounted Display Interface for Human Robot Teams Collaborating in Physically Shared Manufacturing Tasks

Mar 16, 2022

Metrics for Evaluating Social Conformity of Crowd Navigation Algorithms

Feb 02, 2022

An Experimental Validation and Comparison of Reaching Motion Models for Unconstrained Handovers: Towards Generating Humanlike Motions for Human-Robot Handovers

Aug 29, 2021