Xiaodan Liang

EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
Sep 26, 2024

Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models
Sep 11, 2024

Qihoo-T2X: An Efficiency-Focused Diffusion Transformer via Proxy Tokens for Text-to-Any-Task
Sep 06, 2024

Making Large Language Models Better Planners with Reasoning-Decision Alignment
Aug 25, 2024

EasyControl: Transfer ControlNet to Video Diffusion for Controllable Generation and Interpolation
Aug 23, 2024

GarmentAligner: Text-to-Garment Generation via Retrieval-augmented Multi-level Corrections
Aug 23, 2024

MUSE: Mamba is Efficient Multi-scale Learner for Text-video Retrieval
Aug 20, 2024

All Robots in One: A New Standard and Unified Dataset for Versatile, General-Purpose Embodied Agents
Aug 20, 2024

FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance
Aug 15, 2024

APTNESS: Incorporating Appraisal Theory and Emotion Support Strategies for Empathetic Response Generation
Jul 23, 2024