
Tianhang Yu

SoulX-FlashTalk: Real-Time Infinite Streaming of Audio-Driven Avatars via Self-Correcting Bidirectional Distillation

Jan 06, 2026

SoulX-LiveTalk: Real-Time Infinite Streaming of Audio-Driven Avatars via Self-Correcting Bidirectional Distillation

Dec 31, 2025

Radiomap Inpainting for Restricted Areas based on Propagation Priority and Depth Map

May 24, 2023

Exemplar-Based Radio Map Reconstruction of Missing Areas Using Propagation Priority

Sep 10, 2022

INT8 Winograd Acceleration for Conv1D Equipped ASR Models Deployed on Mobile Devices

Oct 28, 2020

MNN: A Universal and Efficient Inference Engine

Feb 27, 2020

Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning

Nov 13, 2019