Meng Yu

Deep Audio Zooming: Beamwidth-Controllable Neural Beamformer

Nov 22, 2023
Meng Yu, Dong Yu

Audio zooming, a signal processing technique, enables selective focusing on and enhancement of sound signals from a specified region while attenuating others. Traditional beamforming and neural beamforming techniques, centered on creating a directional array, require the designation of a single target direction and often overlook the concept of a field of view (FOV), which defines an angular area. In this paper, we propose a simple yet effective FOV feature that amalgamates all directional attributes within the user-defined field. In conjunction, we introduce a counter-FOV feature capturing directional aspects outside the desired field. These advancements ensure refined sound capture, particularly at the FOV's boundaries, and guarantee enhanced capture of all desired sound sources inside the user-defined field. Experimental results demonstrate the efficacy of the introduced angular FOV feature and its seamless incorporation into a low-power subband model suited for real-time applications.
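As a rough illustration of the idea, the sketch below pools hypothetical per-direction features into one in-field (FOV) feature and one counter-FOV feature. The function name, the mean pooling, and the array layout are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def fov_feature(directional_feats, angles_deg, fov=(30.0, 90.0)):
    """Pool per-direction features into one in-field (FOV) feature and one
    counter-FOV feature covering all directions outside the field.

    directional_feats: (num_angles, feat_dim) array, one row per look direction.
    angles_deg: (num_angles,) candidate look directions in degrees.
    fov: (low, high) user-defined angular field of view in degrees.
    """
    angles = np.asarray(angles_deg, dtype=float)
    inside = (angles >= fov[0]) & (angles <= fov[1])
    # Amalgamate directional attributes within the field (mean pooling here;
    # the paper's exact pooling is not specified in this sketch).
    fov_feat = directional_feats[inside].mean(axis=0)
    counter_feat = directional_feats[~inside].mean(axis=0)
    return fov_feat, counter_feat
```

Both pooled vectors would then be fed to the beamforming network, so the model sees the whole user-defined field rather than a single steering direction.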

* 6 pages, 5 figures 

Neural Network Augmented Kalman Filter for Robust Acoustic Howling Suppression

Sep 27, 2023
Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu

Acoustic howling suppression (AHS) is a critical challenge in audio communication systems. In this paper, we propose a novel approach that leverages neural networks (NN) to enhance the performance of traditional Kalman filter algorithms for AHS. Specifically, we integrate NN modules into the Kalman filter to refine the reference signal, a key factor in effective adaptive filtering, and to estimate the filter's covariance matrices, which are crucial for adaptability in dynamic conditions. As a result, the proposed method achieves improved AHS performance compared to both standalone NN and Kalman filter methods. Experimental evaluations validate the effectiveness of our approach.
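A minimal sketch of the idea, assuming a single-channel Kalman-style adaptive filter in which a stand-in `refine` callable plays the role of the NN reference-refinement module. The function name, the scalar covariance, and the update form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kalman_step(w, P, x_ref, y_mic, refine=lambda x: x, q=1e-4):
    """One adaptive-filter update where a neural module first refines the
    reference signal, then Kalman-style equations run on the refined frame.

    w: filter weights (taps,), P: scalar state variance,
    x_ref: reference frame (taps,), y_mic: observed microphone sample,
    refine: stand-in for the NN reference-refinement module.
    """
    x = refine(np.asarray(x_ref, dtype=float))
    e = float(y_mic - w @ x)          # prediction error
    r = float(x @ x)                  # reference power
    k = P * x / (P * r + 1e-8)        # Kalman gain
    w = w + k * e                     # weight update
    P = (1.0 - float(k @ x)) * P + q  # variance update with process noise q
    return w, P, e
```

With `refine` as the identity this reduces to a plain adaptive filter; swapping in a learned module is where the hybrid gains would come from.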

* Paper in submission 

Advancing Acoustic Howling Suppression through Recursive Training of Neural Networks

Sep 27, 2023
Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu

In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process. This framework integrates a neural network (NN) module into the closed-loop system during training with signals generated recursively on the fly to closely mimic the streaming process of acoustic howling suppression (AHS). The proposed recursive training strategy bridges the gap between training and real-world inference scenarios, marking a departure from previous NN-based methods that typically approach AHS as either noise suppression or acoustic echo cancellation. Within this framework, we explore two methodologies: one exclusively relying on NN and the other combining NN with the traditional Kalman filter. Additionally, we propose strategies, including howling detection and initialization using pre-trained offline models, to bolster trainability and expedite the training process. Experimental results validate that this framework offers a substantial improvement over previous methodologies for acoustic howling suppression.
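The recursive on-the-fly signal generation can be sketched as a closed loop in which the suppression module's output is amplified and re-enters the microphone through the feedback path. The `suppress` callable, the sample-wise processing, and all parameters are illustrative stand-ins for the NN module, not the paper's actual system.

```python
import numpy as np

def simulate_closed_loop(target, feedback_ir, suppress, gain=2.0, delay=32):
    """Recursively generate microphone/loudspeaker signals on the fly,
    mimicking the streaming closed loop that creates acoustic howling.

    target: near-end speech, feedback_ir: loudspeaker-to-mic impulse response,
    suppress: sample-wise suppression module (stand-in for the NN),
    gain: system amplification, delay: loop delay in samples.
    """
    n, taps = len(target), len(feedback_ir)
    mic = np.zeros(n)
    ls = np.zeros(n)  # loudspeaker signal
    for t in range(n):
        # playback leaks back into the mic through the feedback path
        fb = sum(feedback_ir[k] * ls[t - k] for k in range(taps) if t - k >= 0)
        mic[t] = target[t] + fb
        out = suppress(mic[t])            # NN (or identity) suppression
        if t + delay < n:
            ls[t + delay] = gain * out    # amplified output re-enters the loop
    return mic, ls
```

Training on signals generated this way exposes the model to its own recursive output, which is exactly the condition it faces at streaming inference.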

* Paper in submission 

Unifying Robustness and Fidelity: A Comprehensive Study of Pretrained Generative Methods for Speech Enhancement in Adverse Conditions

Sep 16, 2023
Heming Wang, Meng Yu, Hao Zhang, Chunlei Zhang, Zhongweiyang Xu, Muqiao Yang, Yixuan Zhang, Dong Yu

Enhancing speech signal quality in adverse acoustic environments is a persistent challenge in speech processing. Existing deep learning based enhancement methods often struggle to effectively remove background noise and reverberation in real-world scenarios, hampering listening experiences. To address these challenges, we propose a novel approach that uses pre-trained generative methods to resynthesize clean, anechoic speech from degraded inputs. This study leverages pre-trained vocoder or codec models to synthesize high-quality speech while enhancing robustness in challenging scenarios. Generative methods effectively handle information loss in speech signals, resulting in regenerated speech that has improved fidelity and reduced artifacts. By harnessing the capabilities of pre-trained models, we achieve faithful reproduction of the original speech in adverse conditions. Experimental evaluations on both simulated datasets and realistic samples demonstrate the effectiveness and robustness of our proposed methods. In particular, by leveraging the codec model, we achieve superior subjective scores for both simulated and realistic recordings. The generated speech exhibits enhanced audio quality with reduced background noise and reverberation. Our findings highlight the potential of pre-trained generative techniques in speech processing, particularly in scenarios where traditional methods falter. Demos are available at https://whmrtm.github.io/SoundResynthesis.

* Paper in submission 

Deep Learning for Joint Acoustic Echo and Acoustic Howling Suppression in Hybrid Meetings

May 04, 2023
Hao Zhang, Meng Yu, Dong Yu

Hybrid meetings have become increasingly necessary in the post-COVID period and have brought new challenges for solving audio-related problems. In particular, the interplay between acoustic echo and acoustic howling in a hybrid meeting makes their joint suppression difficult. This paper proposes a deep learning approach that tackles this problem by formulating the recurrent feedback suppression process as an instantaneous speech separation task using a teacher-forced training strategy. Specifically, a self-attentive recurrent neural network is utilized to extract the target speech from microphone recordings with accessible and learned reference signals, thus suppressing acoustic echo and acoustic howling simultaneously. Different combinations of input signals and loss functions are investigated for performance improvement. Experimental results demonstrate the effectiveness of the proposed method for jointly suppressing echo and howling in hybrid meetings.


Hybrid AHS: A Hybrid of Kalman Filter and Deep Learning for Acoustic Howling Suppression

May 04, 2023
Hao Zhang, Meng Yu, Yuzhong Wu, Tao Yu, Dong Yu

Deep learning has recently been introduced for efficient acoustic howling suppression (AHS). However, the recurrent nature of howling creates a mismatch between offline training and streaming inference, limiting the quality of enhanced speech. To address this limitation, we propose a hybrid method that combines a Kalman filter with a self-attentive recurrent neural network (SARNN) to leverage their respective advantages for robust AHS. During offline training, a pre-processed signal obtained from the Kalman filter and an ideal microphone signal generated via a teacher-forced training strategy are used to train the deep neural network (DNN). During streaming inference, the DNN's parameters are fixed while its output serves as a reference signal for updating the Kalman filter. Evaluations in both offline and streaming inference scenarios, using simulated and real-recorded data, show that the proposed method efficiently suppresses howling and consistently outperforms baselines.

* submitted to INTERSPEECH 2023 

Deep AHS: A Deep Learning Approach to Acoustic Howling Suppression

Feb 18, 2023
Hao Zhang, Meng Yu, Dong Yu

In this paper, we formulate acoustic howling suppression (AHS) as a supervised learning problem and propose a deep learning approach, called Deep AHS, to address it. Deep AHS is trained in a teacher-forcing manner that converts the recurrent howling suppression process into an instantaneous speech separation process, simplifying the problem and accelerating model training. The proposed method utilizes properly designed features and trains an attention-based recurrent neural network (RNN) to extract the target signal from the microphone recording, thus attenuating the playback signal that may lead to howling. Different training strategies are investigated, and a streaming inference method implemented in a recurrent mode is used to evaluate the performance of the proposed method for real-time howling suppression. Deep AHS avoids howling detection and intrinsically prevents howling from happening, allowing for more flexibility in the design of audio systems. Experimental results show the effectiveness of the proposed method for howling suppression under different scenarios.
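The teacher-forcing idea can be sketched as follows: during training, the microphone signal is built by leaking an *ideal* playback signal through the feedback path, instead of the model's own recursive output, so the network learns an instantaneous separation task. The function name and the simple convolutive leak model are assumptions for illustration.

```python
import numpy as np

def teacher_forced_mic(target, ideal_playback, feedback_ir):
    """Build a training microphone signal by mixing the target speech with an
    ideal playback signal leaked through the feedback path (teacher forcing),
    turning the recurrent howling loop into an instantaneous separation problem.
    """
    # playback leaks into the mic through the loudspeaker-to-mic path
    leak = np.convolve(ideal_playback, feedback_ir)[: len(target)]
    return target + leak  # the model is trained to recover `target` from this
```

At inference the loop becomes truly recurrent, which is why the paper evaluates with a streaming, recurrent-mode inference method.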

* Accepted for publication in 2023 ICASSP 

NeuralKalman: A Learnable Kalman Filter for Acoustic Echo Cancellation

Feb 03, 2023
Yixuan Zhang, Meng Yu, Hao Zhang, Dong Yu, DeLiang Wang

The Kalman filter is widely used for addressing acoustic echo cancellation (AEC) problems due to its robustness to double-talk and fast convergence. However, the inability to model nonlinearity and the need to tune control parameters limit such adaptive filtering algorithms. In this paper, we integrate the frequency-domain Kalman filter (FDKF) and deep neural networks (DNNs) into a hybrid method, called NeuralKalman, to leverage the advantages of deep learning and adaptive filtering algorithms. Specifically, we employ a DNN to estimate nonlinearly distorted far-end signals, a transition factor, and the nonlinear transition function in the state equation of the FDKF algorithm. Experimental results show that the proposed NeuralKalman significantly improves the performance of FDKF and outperforms strong baseline methods.
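A per-bin FDKF update can be sketched as below. In the hybrid scheme a DNN would supply the (possibly nonlinearly distorted) far-end spectrum `X` and the transition factor `A`; here both are plain inputs, and all names and noise constants are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fdkf_step(W, P, X, Y, A=0.999, q=1e-3, r_obs=1e-2):
    """One frequency-domain Kalman filter (FDKF) step for AEC, per bin.

    W: echo-path estimate, P: state variance, X: far-end spectrum,
    Y: microphone spectrum (complex arrays over frequency bins).
    A: transition factor, q: process noise, r_obs: observation noise.
    """
    # predict: echo path drifts under the transition factor
    W = A * W
    P = (A ** 2) * P + q
    # update: correct with the microphone observation
    E = Y - X * W                                       # error (near-end estimate)
    K = P * np.conj(X) / (P * np.abs(X) ** 2 + r_obs)   # Kalman gain
    W = W + K * E
    P = (1.0 - (K * X).real) * P
    return W, P, E
```

The DNN-estimated transition factor replaces the hand-tuned `A`, which is one of the control parameters the abstract says adaptive filters otherwise need tuned.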

* The algorithm has been renamed because the original name conflicts with the existing KalmanNet algorithm proposed by Revach et al. (arXiv:2107.10043)

KalmanNet: A Learnable Kalman Filter for Acoustic Echo Cancellation

Jan 29, 2023
Yixuan Zhang, Meng Yu, Hao Zhang, Dong Yu, DeLiang Wang

The Kalman filter is widely used for addressing acoustic echo cancellation (AEC) problems due to its robustness to double-talk and fast convergence. However, the inability to model nonlinearity and the need to tune control parameters limit such adaptive filtering algorithms. In this paper, we integrate the frequency-domain Kalman filter (FDKF) and deep neural networks (DNNs) into a hybrid method, called KalmanNet, to leverage the advantages of deep learning and adaptive filtering algorithms. Specifically, we employ a DNN to estimate nonlinearly distorted far-end signals, a transition factor, and the nonlinear transition function in the state equation of the FDKF algorithm. Experimental results show that the proposed KalmanNet significantly improves the performance of FDKF and outperforms strong baseline methods.


Deep Neural Mel-Subband Beamformer for In-car Speech Separation

Nov 22, 2022
Vinay Kothapally, Yong Xu, Meng Yu, Shi-Xiong Zhang, Dong Yu

While current deep learning (DL)-based beamforming techniques have proven effective in speech separation, they are often designed to process narrow-band (NB) frequencies independently, which results in higher computational costs and inference times, making them unsuitable for real-world use. In this paper, we propose a DL-based mel-subband spatio-temporal beamformer to perform speech separation in a car environment with reduced computational cost and inference time. As opposed to conventional subband (SB) approaches, our framework uses a mel-scale based subband selection strategy, which ensures fine-grained processing for lower frequencies, where most speech formant structure is present, and coarse-grained processing for higher frequencies. In a recursive way, robust frame-level beamforming weights are determined for each speaker location/zone in the car from the estimated subband speech and noise covariance matrices. Furthermore, the proposed framework also estimates and suppresses any echoes from the loudspeaker(s) by using echo reference signals. We compare the performance of our proposed framework to several NB, SB, and full-band (FB) processing techniques in terms of speech quality and recognition metrics. Based on experimental evaluations on simulated and real-world recordings, we find that our framework achieves better separation performance than all SB and FB approaches and performs close to NB processing techniques while requiring lower computing cost.
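The mel-scale subband selection can be sketched as grouping STFT bins into mel-spaced bands: nearly per-bin resolution at low frequencies (where formants live) and coarser grouping at high frequencies. The function name, bin counts, and the standard HTK-style mel formula are assumptions for illustration.

```python
import numpy as np

def mel_subband_groups(n_bins=257, sr=16000, n_subbands=24):
    """Group STFT bins into mel-spaced subbands: fine resolution at low
    frequencies and progressively coarser grouping at high frequencies.

    Returns a list of n_subbands index arrays, one per subband.
    """
    hz = np.linspace(0.0, sr / 2, n_bins)
    mel = 2595.0 * np.log10(1.0 + hz / 700.0)     # Hz -> mel (HTK formula)
    edges = np.linspace(0.0, mel[-1], n_subbands + 1)
    # subband index for every bin; clip so the top bin lands in the last band
    idx = np.clip(np.searchsorted(edges, mel, side="right") - 1,
                  0, n_subbands - 1)
    return [np.where(idx == b)[0] for b in range(n_subbands)]
```

Each group would then share one set of beamforming weights, which is where the savings over narrow-band per-bin processing come from.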

* Submitted to ICASSP 2023 