Jeannette Bohg

KITE: Keypoint-Conditioned Policies for Semantic Manipulation

Jun 29, 2023
Priya Sundaresan, Suneel Belkhale, Dorsa Sadigh, Jeannette Bohg

The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects

Jun 01, 2023
Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu Li, Li Fei-Fei, Jiajun Wu

TidyBot: Personalized Robot Assistance with Large Language Models

May 09, 2023
Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser

CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects

Mar 28, 2023
Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Rares Andrei Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar

Text2Motion: From Natural Language Instructions to Feasible Plans

Mar 21, 2023
Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, Jeannette Bohg

Development and Evaluation of a Learning-based Model for Real-time Haptic Texture Rendering

Dec 27, 2022
Negin Heravi, Heather Culbertson, Allison M. Okamura, Jeannette Bohg

Active Task Randomization: Learning Visuomotor Skills for Sequential Manipulation by Proposing Feasible and Novel Tasks

Nov 11, 2022
Kuan Fang, Toki Migimatsu, Ajay Mandlekar, Li Fei-Fei, Jeannette Bohg

ShaSTA: Modeling Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking

Nov 08, 2022
Tara Sadjadpour, Jie Li, Rares Ambrus, Jeannette Bohg

Learning Tool Morphology for Contact-Rich Manipulation Tasks with Differentiable Simulation

Nov 04, 2022
Mengxi Li, Rika Antonova, Dorsa Sadigh, Jeannette Bohg

Task-Driven In-Hand Manipulation of Unknown Objects with Tactile Sensing

Oct 28, 2022
Chaoyi Pan, Marion Lepert, Shenli Yuan, Rika Antonova, Jeannette Bohg
