
Han Liang

Seedance 1.5 pro: A Native Audio-Visual Joint Generation Foundation Model (Dec 23, 2025)

InterAgent: Physics-based Multi-agent Command Execution via Diffusion on Interaction Graphs (Dec 12, 2025)

STADI: Fine-Grained Step-Patch Diffusion Parallelism for Heterogeneous GPUs (Sep 05, 2025)

OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation (Aug 26, 2025)

LLaVA-SLT: Visual Language Tuning for Sign Language Translation (Dec 21, 2024)

FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients (Nov 04, 2024)

Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance (Jan 30, 2024)

OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers (Dec 18, 2023)

InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions (Apr 12, 2023)

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors (May 30, 2022)