Zhou Xian

Thin-Shell Object Manipulations With Differentiable Physics Simulations

Mar 30, 2024

DIFFTACTILE: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation

Mar 13, 2024

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

Feb 10, 2024

RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

Nov 13, 2023

Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models

Oct 27, 2023

Act3D: Infinite Resolution Action Detection Transformer for Robotic Manipulation

Jun 30, 2023

Towards A Foundation Model for Generalist Robots: Diverse Skill Learning at Scale via Automated Task and Scene Generation

May 17, 2023

Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement

May 06, 2023

SoftZoo: A Soft Robot Co-design Benchmark For Locomotion In Diverse Environments

Mar 16, 2023

FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation

Mar 04, 2023