Chengshi Zheng

TaylorBeamformer: Learning All-Neural Multi-Channel Speech Enhancement from Taylor's Approximation Theory

Mar 14, 2022
Andong Li, Guochen Yu, Chengshi Zheng, Xiaodong Li


MDNet: Learning Monaural Speech Enhancement from Deep Prior Gradient

Mar 14, 2022
Andong Li, Chengshi Zheng, Ziyang Zhang, Xiaodong Li


DMF-Net: A decoupling-style multi-band fusion model for real-time full-band speech enhancement

Mar 02, 2022
Guochen Yu, Yuansheng Guan, Weixin Meng, Chengshi Zheng, Hui Wang


DBT-Net: Dual-branch federative magnitude and phase estimation with attention-in-attention transformer for monaural speech enhancement

Feb 16, 2022
Guochen Yu, Andong Li, Hui Wang, Yutian Wang, Yuxuan Ke, Chengshi Zheng


Low-latency Monaural Speech Enhancement with Deep Filter-bank Equalizer

Feb 14, 2022
Chengshi Zheng, Wenzhe Liu, Andong Li, Yuxuan Ke, Xiaodong Li


A Neural Beam Filter for Real-time Multi-channel Speech Enhancement

Feb 05, 2022
Wenzhe Liu, Andong Li, Chengshi Zheng, Xiaodong Li


A deep complex network with multi-frame filtering for stereophonic acoustic echo cancellation

Feb 03, 2022
Linjuan Cheng, Chengshi Zheng, Andong Li, Renhua Peng, Xiaodong Li


EmotionBox: a music-element-driven emotional music generation system using Recurrent Neural Network

Dec 16, 2021
Kaitong Zheng, Ruijie Meng, Chengshi Zheng, Xiaodong Li, Jinqiu Sang, Juanjuan Cai, Jie Wang


Noise-robust blind reverberation time estimation using noise-aware time-frequency masking

Dec 09, 2021
Kaitong Zheng, Chengshi Zheng, Jinqiu Sang, Yulong Zhang, Xiaodong Li


Dual-branch Attention-In-Attention Transformer for single-channel speech enhancement

Nov 05, 2021
Guochen Yu, Andong Li, Yutian Wang, Yinuo Guo, Hui Wang, Chengshi Zheng
