Yu Ding

Artificial Intelligence/Operations Research Workshop 2 Report Out
Apr 10, 2023
John Dickerson, Bistra Dilkina, Yu Ding, Swati Gupta, Pascal Van Hentenryck, Sven Koenig, Ramayya Krishnan, Radhika Kulkarni, Catherine Gill, Haley Griffin, Maddy Hunter, Ann Schwartz

TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles
Apr 01, 2023
Yifeng Ma, Suzhen Wang, Yu Ding, Bowen Ma, Tangjie Lv, Changjie Fan, Zhipeng Hu, Zhidong Deng, Xin Yu

Facial Affective Analysis based on MAE and Multi-modal Information for 5th ABAW Competition
Mar 20, 2023
Wei Zhang, Bowen Ma, Feng Qiu, Yu Ding

DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video
Mar 07, 2023
Zhimeng Zhang, Zhipeng Hu, Wenjin Deng, Changjie Fan, Tangjie Lv, Yu Ding

Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase
Mar 03, 2023
Lintao Wang, Kun Hu, Lei Bai, Yu Ding, Wanli Ouyang, Zhiyong Wang

Effective Multimodal Reinforcement Learning with Modality Alignment and Importance Enhancement
Feb 18, 2023
Jinming Ma, Feng Wu, Yingfeng Chen, Xianpeng Ji, Yu Ding

StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles
Jan 03, 2023
Yifeng Ma, Suzhen Wang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Zhidong Deng, Xin Yu

InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis
Dec 20, 2022
Feng Qiu, Wanzeng Kong, Yu Ding

TCFimt: Temporal Counterfactual Forecasting from Individual Multiple Treatment Perspective
Dec 17, 2022
Pengfei Xi, Guifeng Wang, Zhipeng Hu, Yu Xiong, Mingming Gong, Wei Huang, Runze Wu, Yu Ding, Tangjie Lv, Changjie Fan, Xiangnan Feng

EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis
Dec 16, 2022
Feng Qiu, Chengyang Xie, Yu Ding, Wanzeng Kong
