Wei Liang

Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance

Mar 26, 2024

Let Storytelling Tell Vivid Stories: An Expressive and Fluent Multimodal Storyteller

Mar 12, 2024

Language-driven All-in-one Adverse Weather Removal

Dec 03, 2023

Active Reasoning in an Open-World Environment

Nov 03, 2023

DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation

Aug 14, 2023

MEWL: Few-shot multimodal word learning with referential uncertainty

Jun 01, 2023

Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding

Apr 06, 2023

Diffusion-based Generation, Optimization, and Planning in 3D Scenes

Jan 15, 2023

The state-of-the-art 3D anisotropic intracranial hemorrhage segmentation on non-contrast head CT: The INSTANCE challenge

Jan 12, 2023

Towards Versatile Embodied Navigation

Oct 30, 2022