Shubham Sonawani

Interactive Robotics Laboratory, Arizona State University, Tempe, AZ, 85281, USA

iRoCo: Intuitive Robot Control From Anywhere Using a Smartwatch

Mar 11, 2024

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Projecting Robot Intentions Through Visual Cues: Static vs. Dynamic Signaling

Aug 19, 2023

Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation

Jun 22, 2023

Imitation Learning based Auto-Correction of Extrinsic Parameters for A Mixed-Reality Setup

Dec 16, 2022

Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation

Dec 08, 2022

Multimodal Data Fusion for Power-On-and-Go Robotic Systems in Retail

Mar 23, 2021

Assistive Relative Pose Estimation for On-orbit Assembly using Convolutional Neural Networks

Feb 19, 2020