John R. Hershey

Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language

Jun 09, 2024

Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge

Feb 02, 2024

TokenSplit: Using Discrete Speech Representations for Direct, Refined, and Transcript-Conditioned Speech Separation and Recognition

Aug 21, 2023

The CHiME-7 UDASE task: Unsupervised domain adaptation for conversational speech enhancement

Jul 07, 2023

Unsupervised Multi-channel Separation and Adaptation

May 18, 2023

AudioSlots: A slot-centric generative model for audio separation

May 09, 2023

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation

Jul 20, 2022

Distance-Based Sound Separation

Jul 01, 2022

CycleGAN-Based Unpaired Speech Dereverberation

Mar 29, 2022

Adapting Speech Separation to Real-World Meetings Using Mixture Invariant Training

Oct 20, 2021