Zhixian Zhao

Towards Fine-Grained Multi-Dimensional Speech Understanding: Data Pipeline, Benchmark, and Model
May 12, 2026

Full-Duplex Interaction in Spoken Dialogue Systems: A Comprehensive Study from the ICASSP 2026 HumDial Challenge
Apr 23, 2026

HumDial-EIBench: A Human-Recorded Multi-Turn Emotional Intelligence Benchmark for Audio Language Models
Apr 13, 2026

EmoOmni: Bridging Emotional Understanding and Expression in Omni-Modal LLMs
Feb 25, 2026

Integrating Fine-Grained Audio-Visual Evidence for Robust Multimodal Emotion Reasoning
Jan 26, 2026

dLLM-ASR: A Faster Diffusion LLM-based Framework for Speech Recognition
Jan 25, 2026

Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens
Mar 03, 2025

Steering Language Model to Stable Speech Emotion Recognition via Contextual Perception and Chain of Thought
Feb 25, 2025

Improving Multimodal Emotion Recognition by Leveraging Acoustic Adaptation and Visual Alignment
Sep 10, 2024