Shi-Xiong Zhang

LibriheavyMix: A 20,000-Hour Dataset for Single-Channel Reverberant Multi-Talker Speech Separation, ASR and Speaker Diarization

Sep 01, 2024

Comparing Discrete and Continuous Space LLMs for Speech Recognition

Sep 01, 2024

Advancing Multi-talker ASR Performance with Large Language Models

Aug 30, 2024

Multi-Channel Multi-Speaker ASR Using Target Speaker's Solo Segment

Jun 17, 2024

RIR-SF: Room Impulse Response Based Spatial Feature for Multi-channel Multi-talker ASR

Oct 31, 2023

UniX-Encoder: A Universal $X$-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing

Oct 25, 2023

M3-AUDIODEC: Multi-channel multi-speaker multi-spatial audio codec

Sep 23, 2023

MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning

Mar 11, 2023

3D Neural Beamforming for Multi-channel Speech Separation Against Location Uncertainty

Feb 27, 2023

Towards Unified All-Neural Beamforming for Time and Frequency Domain Speech Separation

Dec 24, 2022