
Shiqi Zhang

Unrestricted Global Phase Bias-Aware Single-channel Speech Enhancement with Conformer-based Metric GAN

Feb 13, 2024

ORLA*: Mobile Manipulator-Based Object Rearrangement with Lazy A*

Sep 24, 2023

Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control

Sep 08, 2023

Symbolic State Space Optimization for Long Horizon Mobile Manipulation Planning

Jul 21, 2023

Integrating Action Knowledge and LLMs for Task Planning and Situation Handling in Open Worlds

May 27, 2023

ARDIE: AR, Dialogue, and Eye Gaze Policies for Human-Robot Collaboration

May 08, 2023

LLM+P: Empowering Large Language Models with Optimal Planning Proficiency

May 05, 2023

Grounding Classical Task Planners via Vision-Language Models

Apr 17, 2023

Task and Motion Planning with Large Language Models for Object Rearrangement

Mar 14, 2023

Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration

Nov 13, 2022