Weiyu Liu

Foundation Models in Robotics: Applications, Challenges, and the Future

Dec 13, 2023
Roya Firoozi, Johnathan Tucker, Stephen Tian, Anirudha Majumdar, Jiankai Sun, Weiyu Liu, Yuke Zhu, Shuran Song, Ashish Kapoor, Karol Hausman, Brian Ichter, Danny Driess, Jiajun Wu, Cewu Lu, Mac Schwager

GraspGPT: Leveraging Semantic Knowledge from a Large Language Model for Task-Oriented Grasping

Jul 30, 2023
Chao Tang, Dehao Huang, Wenqi Ge, Weiyu Liu, Hong Zhang

Latent Space Planning for Multi-Object Manipulation with Environment-Aware Relational Classifiers

May 18, 2023
Yixuan Huang, Nichols Crawford Taylor, Adam Conkey, Weiyu Liu, Tucker Hermans

Task-Oriented Grasp Prediction with Visual-Language Inputs

Feb 28, 2023
Chao Tang, Dehao Huang, Lingxiao Meng, Weiyu Liu, Hong Zhang

StructDiffusion: Object-Centric Diffusion for Semantic Rearrangement of Novel Objects

Nov 08, 2022
Weiyu Liu, Tucker Hermans, Sonia Chernova, Chris Paxton

StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects

Oct 19, 2021
Weiyu Liu, Chris Paxton, Tucker Hermans, Dieter Fox

Towards Robust One-shot Task Execution using Knowledge Graph Embeddings

May 10, 2021
Angel Daruna, Lakshmi Nair, Weiyu Liu, Sonia Chernova

Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping

Nov 13, 2020
Adithyavairavan Murali, Weiyu Liu, Kenneth Marino, Sonia Chernova, Abhinav Gupta

Taking Recoveries to Task: Recovery-Driven Development for Recipe-based Robot Tasks

Jan 28, 2020
Siddhartha Banerjee, Angel Daruna, David Kent, Weiyu Liu, Jonathan Balloch, Abhinav Jain, Akshay Krishnan, Muhammad Asif Rana, Harish Ravichandar, Binit Shah, Nithin Shrivatsav, Sonia Chernova
