This paper introduces a practical approach for leveraging a real-time deep learning model to alternate between speech enhancement and joint speech enhancement and separation, depending on whether the input mixture contains one or two active speakers. The scale-invariant signal-to-distortion ratio (SI-SDR) has been shown to be a highly effective training measure in time-domain speech separation. However, the SI-SDR metric is ill-defined for zero-energy target signals, which is a problem when training a speech separation model on utterances with varying numbers of talkers. Unlike existing solutions that focus on modifying the loss function to accommodate zero-energy target signals, the proposed approach circumvents this problem by training the model to extract speech on both of its output channels regardless of whether the input is a single- or dual-talker mixture. A lightweight speaker overlap detection (SOD) module is also introduced to differentiate between single- and dual-talker segments in real time. The proposed module takes advantage of the new formulation by operating directly on the masks produced by the separation model, rather than on the original mixture, thus effectively simplifying the detection task. Experimental results show that the proposed training approach outperforms existing solutions, and that the SOD module achieves high accuracy.
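For context, the zero-energy issue stems directly from the standard SI-SDR definition, which projects the estimate onto the target. The minimal NumPy sketch below (not the authors' training code) computes SI-SDR and shows exactly where a silent target breaks it:

```python
import numpy as np

def si_sdr(estimate, target):
    """Scale-invariant SDR in dB. Ill-defined for a zero-energy target:
    the projection below divides by the target energy."""
    # Remove means, as is commonly done before computing SI-SDR.
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    target_energy = np.dot(target, target)  # zero for a silent target
    s_target = (np.dot(estimate, target) / target_energy) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / np.dot(e_noise, e_noise))
```

For a silent target (all zeros), `target_energy` vanishes and the metric is undefined, which is precisely the failure mode the dual-output training formulation sidesteps.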
This paper presents a method for real-time estimation of the two-dimensional direction of arrival (2D-DOA) of one or more sound sources using a nonlinear array of three microphones. The 2D-DOA is estimated from frame-level time difference of arrival (TDOA) measurements. Unlike conventional methods, which infer location parameters from TDOAs using a theoretical model, we propose a more practical approach based on supervised learning. The proposed model employs nearest neighbor search (NNS) applied to a spherical Fibonacci lattice consisting of TDOA-to-2D-DOA mappings learned directly in the field. Filtering and clustering post-processors are also introduced to improve source detection and localization robustness.
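As a rough sketch of the lookup step, the spherical Fibonacci lattice and the nearest neighbor search can be realized as below. The TDOA table here is random placeholder data standing in for the mappings learned in the field, and the choice of three microphone pairs is an assumption for illustration:

```python
import numpy as np

def fibonacci_lattice(n):
    """Quasi-uniform directions on the unit sphere via the golden-angle spiral."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n            # uniformly spaced in z
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

lattice = fibonacci_lattice(1000)

# Placeholder for the field-learned table: one TDOA vector (e.g., three
# microphone pairs) stored per lattice direction. In the actual system
# these are measured during calibration, not random.
rng = np.random.default_rng(0)
tdoa_table = rng.normal(size=(1000, 3))

def estimate_2d_doa(tdoa_frame):
    """Return the lattice direction whose stored TDOA vector is closest
    (in Euclidean distance) to the measured frame-level TDOAs."""
    idx = np.argmin(np.sum((tdoa_table - tdoa_frame) ** 2, axis=1))
    return lattice[idx]
```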
Invariance to microphone array configuration is a rare attribute in neural beamformers. Filter-and-sum (FS) methods in this class define the target signal with respect to a reference channel. However, this not only complicates the formulation in reverberant conditions but also the network itself, which must include a mechanism for inferring which channel is the reference. To address these issues, this study presents the Delay Filter-and-Sum Network (DFSNet), a steerable neural beamformer for causal speech enhancement that is invariant to the number of microphones and the array geometry. In DFSNet, the acquired signals are first steered toward the speech source direction prior to the FS operation, which simplifies the task to the estimation of delay-and-summed reverberant clean speech. The proposed model is designed to incur low latency, low distortion, and a small memory and computational burden, making it a strong candidate for hearing aid applications. Simulation results reveal performance comparable to noncausal state-of-the-art methods.
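The steering stage preceding the FS operation can be pictured as a simple delay-and-sum front end. The sketch below uses integer-sample delays for clarity; a practical implementation would use fractional delays, and DFSNet's learned filters operate on the aligned channels afterward:

```python
import numpy as np

def steer_and_sum(x, delays, fs):
    """Time-align each channel toward the source direction, then average.
    x: (num_mics, num_samples); delays: per-channel propagation delays in
    seconds, relative to the earliest microphone (all nonnegative)."""
    num_mics, num_samples = x.shape
    aligned = np.zeros_like(x)
    for m in range(num_mics):
        shift = int(round(delays[m] * fs))   # integer-sample approximation
        aligned[m, : num_samples - shift] = x[m, shift:]
    return aligned.mean(axis=0)
```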
This study presents a closed-form solution for localizing and synchronizing an acoustic sensor node with respect to a wireless acoustic sensor network (WASN). The aim is to allow efficient scaling of a WASN by individually calibrating newly joined sensor nodes instead of recalibrating the entire array. A key contribution is that the sensor to be calibrated does not need to include a built-in emitter. The proposed method uses signals emitted from spatially distributed sources to compute time difference of arrival (TDOA) measurements between the existing WASN and the new sensor. The problem is then modeled as a set of multivariate nonlinear TDOA equations. Through a simple transformation, the nonlinear TDOA equations are converted into a system of linear equations, and weighted least squares (WLS) is applied to obtain an accurate estimate of the calibration parameters. The signal sources can be either known emitters within the existing WASN or arbitrary sources in the environment, allowing for flexible applicability in both active and passive calibration scenarios. Simulation results under various conditions show high joint localization and synchronization performance, often comparable to the Cramér-Rao lower bound (CRLB).
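The final estimation step is a standard WLS solve. The sketch below shows only that generic step with synthetic data; the paper's specific transformation from the nonlinear TDOA equations to the linear system (i.e., the actual entries of A and b) is not reproduced here:

```python
import numpy as np

def wls_solve(A, b, W):
    """Weighted least squares: theta_hat = (A^T W A)^{-1} A^T W b."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)

# Toy usage: 8 linearized equations in 4 unknowns (e.g., a 3-D node
# position plus a clock offset), with small measurement noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))
theta_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ theta_true + 0.01 * rng.normal(size=8)
W = np.eye(8)                  # equal measurement weights for illustration
print(wls_solve(A, b, W))      # close to theta_true
```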
This study presents UX-Net, a time-domain audio separation network (TasNet) based on a modified U-Net architecture. The proposed UX-Net works in real time and handles either single- or multi-microphone input. Inspired by the filter-and-process behavior of the human auditory system, the proposed system introduces novel mixer and separation modules, which result in cost- and memory-efficient modeling of speech sources. The mixer module combines the encoded input in a latent feature space and outputs a desired number of output streams. Then, in the separation module, a modified U-Net (UX) block is applied. The UX block first filters the encoded input at various resolutions, then aggregates the filtered information and applies recurrent processing to estimate the masks of the separated sources. The letter 'X' in UX-Net is a placeholder for the type of recurrent layer employed in the UX block. Empirical results on the WSJ0-2mix benchmark dataset show that one of the UX-Net configurations outperforms the state-of-the-art Conv-TasNet system by 0.85 dB in SI-SNR while using only 16% of the model parameters, requiring 58% fewer computations, and maintaining low latency.
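The abstract does not detail the mixer module's internals; the PyTorch sketch below assumes a minimal realization, a pointwise convolution that expands the latent representation and splits it into the desired number of streams, purely for illustration:

```python
import torch
import torch.nn as nn

class Mixer(nn.Module):
    """Hypothetical mixer: combine the encoded mixture in the latent space
    and emit a fixed number of output streams (one per source). The actual
    UX-Net mixer may differ; this is an assumed minimal realization."""
    def __init__(self, latent_dim, num_streams):
        super().__init__()
        self.num_streams = num_streams
        # Assumed realization: 1x1 convolution expanding to S parallel streams.
        self.mix = nn.Conv1d(latent_dim, latent_dim * num_streams, kernel_size=1)

    def forward(self, z):                        # z: (batch, latent_dim, frames)
        y = self.mix(z)                          # (batch, latent_dim * S, frames)
        return y.chunk(self.num_streams, dim=1)  # S x (batch, latent_dim, frames)

streams = Mixer(latent_dim=256, num_streams=2)(torch.randn(1, 256, 100))
```

Each resulting stream would then pass through the separation module's UX block for multi-resolution filtering and recurrent mask estimation.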