
Zongqing Lu

ODRL: A Benchmark for Off-Dynamics Reinforcement Learning

Oct 28, 2024

MLLM as Retriever: Interactively Learning Multimodal Retrieval for Embodied Agents

Oct 04, 2024

SELU: Self-Learning Embodied MLLMs in Unknown Environments

Oct 04, 2024

Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models

Oct 04, 2024

Cross-Embodiment Dexterous Grasping with Reinforcement Learning

Oct 03, 2024

From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities

Oct 03, 2024

Learning Diverse Bimanual Dexterous Manipulation Skills from Human Demonstrations

Oct 03, 2024

Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping

Oct 03, 2024

Egocentric Vision Language Planning

Aug 11, 2024

Visual Grounding for Object-Level Generalization in Reinforcement Learning

Aug 04, 2024