We propose a novel integrated sensing and communication (ISAC) system that leverages sensing to assist communication, ensuring fast initial access, seamless user tracking, and uninterrupted communication for millimeter wave (mmWave) wideband systems. True-time delayers (TTDs) are utilized to generate frequency-dependent radar rainbow beams by controlling the beam squint effect. These beams cover users across the entire angular space simultaneously, enabling fast beam training with just one orthogonal frequency-division multiplexing (OFDM) symbol. Three detection and estimation schemes based on radar rainbow beams are proposed for estimating the users' angles, distances, and velocities, which are then exploited for communication beamformer design. The first scheme utilizes a single-antenna radar receiver and one set of rainbow beams, but may suffer from Doppler ambiguity. To tackle this limitation, two additional schemes are introduced, utilizing two sets of rainbow beams and a multi-antenna receiver, respectively. Furthermore, the proposed detection and estimation schemes are extended to realize user tracking by choosing different subsets of OFDM subcarriers. This approach eliminates the need to switch phase shifters and TTDs, which is typically necessary in existing tracking techniques, thereby reducing the demands on the control circuitry. Simulation results reveal the effectiveness of the proposed rainbow beam-based training and tracking methods for mobile users. Notably, the scheme employing a multi-antenna radar receiver can accurately estimate the channel parameters and can support communication rates comparable to those achieved with perfect channel information.
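As an illustrative numerical sketch (toy parameter values, not the paper's design), the beam squint effect behind rainbow beams can be visualized by sweeping the array gain over angles at several OFDM subcarriers: frequency-independent phase shifters combined with per-antenna delays make each subcarrier's beam point to a different angle.

```python
import numpy as np

c = 3e8
fc, bw, n_sub = 100e9, 10e9, 5        # carrier, bandwidth, subcarriers (toy values)
n_ant = 64
d = c / (2 * fc)                      # half-wavelength spacing at the carrier
m = np.arange(n_ant)

# Per-antenna TTD delay increment and phase-shifter increment (illustrative):
# their combination makes the pointing angle a function of subcarrier frequency.
dtau = 0.3 * d / c                    # TTD delay step between adjacent antennas
phi = 0.2                             # phase shifter step (in cycles)

angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
freqs = fc + np.linspace(-bw / 2, bw / 2, n_sub)

peaks = []
for f in freqs:
    w = np.exp(-2j * np.pi * (f * m * dtau + m * phi))            # TTD + phase shifter weights
    A = np.exp(2j * np.pi * f * d * np.outer(m, np.sin(angles)) / c)  # steering vectors
    gain = np.abs(w @ A)              # array gain versus angle at this subcarrier
    peaks.append(np.degrees(angles[np.argmax(gain)]))

print([round(p, 1) for p in peaks])   # each subcarrier's beam points to a different angle
```

The spread of the peak angles across subcarriers is what the rainbow beam exploits: one OFDM symbol probes many directions at once.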
Recent advances in neural text-to-speech (TTS) models have brought thousands of TTS applications into daily life, where models are deployed in the cloud to provide services for customers. Among these models are diffusion probabilistic models (DPMs), which can be stably trained and are more parameter-efficient than other generative models. As transmitting data between customers and the cloud introduces high latency and the risk of exposing private data, deploying TTS models on edge devices is preferred. When implementing DPMs on edge devices, two practical problems arise. First, current DPMs are not lightweight enough for resource-constrained devices. Second, DPMs require many denoising steps during inference, which increases latency. In this work, we present LightGrad, a lightweight DPM for TTS. LightGrad is equipped with a lightweight U-Net diffusion decoder and a training-free fast sampling technique, reducing both model parameters and inference latency. Streaming inference is also implemented in LightGrad to further reduce latency. Compared with Grad-TTS, LightGrad achieves a 62.2% reduction in parameters and a 65.7% reduction in latency, while preserving comparable speech quality on both Mandarin Chinese and English with 4 denoising steps.
In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective \textbf{training-free} methods to decrease the Token Display Time (TDT) of streaming ASR models \textbf{without any accuracy loss}. The core idea of ZeroPrompt is to append zeroed content to each chunk during inference, which acts like a prompt that encourages the model to predict future tokens before they are spoken. We argue that streaming acoustic encoders naturally possess the modeling ability of Masked Language Models, and our experiments demonstrate that ZeroPrompt is cheap to implement and can be applied to streaming acoustic encoders on any dataset without any accuracy loss. Specifically, compared with our baseline models, we achieve a 350 $\sim$ 700ms reduction in First Token Display Time (TDT-F) and a 100 $\sim$ 400ms reduction in Last Token Display Time (TDT-L), with theoretically and experimentally equal WER on both the Aishell-1 and Librispeech datasets.
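The zero-padding step can be sketched in a few lines (a minimal illustration with hypothetical function and parameter names; frame counts and feature dimensions are arbitrary): zeroed frames are appended to each feature chunk so the encoder can speculate about audio that has not arrived yet, and those speculative tokens are later refined once the real frames stream in.

```python
import numpy as np

def zero_prompt(chunk: np.ndarray, num_prompt_frames: int) -> np.ndarray:
    """Append zeroed frames to a feature chunk so a streaming encoder can
    speculatively predict tokens for audio that has not arrived yet."""
    num_frames, feat_dim = chunk.shape
    prompt = np.zeros((num_prompt_frames, feat_dim), dtype=chunk.dtype)
    return np.concatenate([chunk, prompt], axis=0)

# Prompt-and-Refine idea: display the speculative tokens immediately, then
# overwrite them when the next chunk supplies the real audio for those frames.
chunk = np.random.randn(16, 80).astype(np.float32)  # 16 frames of 80-dim fbank
padded = zero_prompt(chunk, num_prompt_frames=8)
print(padded.shape)  # (24, 80)
```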
In this paper, we investigate downlink power adaptation for the suborbital node in suborbital-ground communication systems under extremely high reliability and ultra-low latency communication requirements, which can be formulated as a power threshold-minimization problem. Specifically, the interference from satellites is modeled as an accumulation of stochastic point processes on different orbital planes, and hybrid beamforming (HBF) is considered. Furthermore, to satisfy the Quality of Service (QoS) constraints, the finite blocklength regime is adopted. Numerical results show that the transmit power required by the suborbital node decreases as the elevation angle of the receiving station increases.
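As background for the finite blocklength regime, here is a minimal sketch of the Polyanskiy-Poor-Verdu normal approximation for an AWGN channel (illustrative only; the paper's model additionally involves HBF and the stochastic interference geometry): at blocklength $n$ and error probability $\epsilon$, the achievable rate is penalized below Shannon capacity by a dispersion term, which is why short, ultra-reliable packets demand more transmit power.

```python
import math
from statistics import NormalDist

def achievable_rate(snr: float, blocklength: int, eps: float) -> float:
    """Normal approximation to the maximal rate (bits per channel use) of an
    AWGN channel in the finite blocklength regime:
    R ~ log2(1+snr) - sqrt(V/n) * Qinv(eps) * log2(e)."""
    capacity = math.log2(1.0 + snr)
    dispersion = 1.0 - 1.0 / (1.0 + snr) ** 2      # channel dispersion V
    q_inv = NormalDist().inv_cdf(1.0 - eps)        # Q^{-1}(eps)
    penalty = math.sqrt(dispersion / blocklength) * q_inv * math.log2(math.e)
    return capacity - penalty

# Shorter blocklengths pay a larger penalty relative to Shannon capacity,
# so meeting a target rate at small n requires a higher SNR (more power).
print(achievable_rate(snr=10.0, blocklength=200, eps=1e-5))
print(achievable_rate(snr=10.0, blocklength=1000, eps=1e-5))
```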
Recently, the unified streaming and non-streaming two-pass (U2/U2++) end-to-end model for speech recognition has shown great performance in terms of streaming capability, accuracy, and latency. In this paper, we present fast-U2++, an enhanced version of U2++ that further reduces partial latency. The core idea of fast-U2++ is to output partial results from the bottom layers of its encoder with a small chunk, while using a large chunk in the top layers of its encoder to compensate for the performance degradation caused by the small chunk. Moreover, we use a knowledge distillation method to reduce the token emission latency. We present extensive experiments on the Aishell-1 dataset. Experiments and ablation studies show that, compared to U2++, fast-U2++ reduces model latency from 320ms to 80ms and achieves a character error rate (CER) of 5.06% with a streaming setup.
In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models. The core idea of TrimTail is to apply a length penalty (i.e., by trimming trailing frames, see Fig. 1-(b)) directly to the spectrogram of input utterances, which does not require any alignment. We demonstrate that TrimTail is computationally cheap and can be applied online and combined with any training loss and any model architecture on any dataset without extra effort, by applying it to various end-to-end streaming ASR networks trained with either CTC loss [1] or Transducer loss [2]. We achieve a 100 $\sim$ 200ms latency reduction with equal or even better accuracy on both Aishell-1 and Librispeech. Moreover, by using TrimTail, we can achieve a 400ms algorithmic improvement of User Sensitive Delay (USD) with an accuracy loss of less than 0.2.
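The trimming operation itself can be sketched as a one-line spectrogram augmentation (an illustrative sketch with hypothetical names; the amount trimmed per utterance is a design choice, shown here as uniform up to a cap):

```python
import numpy as np

def trim_tail(spec: np.ndarray, max_trim: int, rng: np.random.Generator) -> np.ndarray:
    """Randomly drop up to `max_trim` trailing frames from an utterance's
    spectrogram, imposing a length penalty that pressures the model toward
    earlier token emission. Requires no alignment information."""
    num_trim = int(rng.integers(0, max_trim + 1))  # 0 keeps the utterance intact
    return spec[: spec.shape[0] - num_trim]

rng = np.random.default_rng(0)
spec = np.random.randn(100, 80)        # 100 frames of 80-dim features
trimmed = trim_tail(spec, max_trim=20, rng=rng)
print(trimmed.shape)                   # between (80, 80) and (100, 80)
```

Because the operation acts only on the input features, it composes with any loss or architecture, which matches why no extra effort is needed to apply it.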
The recently proposed Conformer architecture, which combines convolution with attention to capture both local and global dependencies, has become the \textit{de facto} backbone model for Automatic Speech Recognition~(ASR). Inherited from Natural Language Processing (NLP) tasks, the architecture takes Layer Normalization~(LN) as its default normalization technique. However, through a series of systematic studies, we find that LN might take 10\% of the inference time despite contributing only 0.1\% of the FLOPs. This motivates us to replace LN with other normalization techniques, e.g., Batch Normalization~(BN), to speed up inference with the help of operator fusion methods and the avoidance of calculating mean and variance statistics during inference. After examining several naive attempts that directly remove all LN layers or replace them with BN in the same place, we find that the divergence issue is mainly caused by unstable layer outputs. We therefore propose to append a BN layer to each linear or convolution layer, where stabilized training results are observed. We also propose to simplify the activations in Conformer, such as Swish and GLU, by replacing them with ReLU. All these replaced modules can be fused into the weights of the adjacent linear/convolution layers and hence have zero inference cost. Therefore, we name it FusionFormer. Our experiments indicate that FusionFormer is as effective as the LN-based Conformer and is about 10\% faster.
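The "zero inference cost" claim rests on standard BN folding: a BatchNorm layer following a linear layer is an affine map, so its scale and shift can be absorbed into the linear weights offline. A minimal NumPy sketch (generic BN folding, not the paper's implementation):

```python
import numpy as np

def fuse_linear_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer that follows a linear layer, y = BN(W x + b),
    into the linear weights so BN costs nothing at inference time."""
    scale = gamma / np.sqrt(var + eps)
    W_fused = W * scale[:, None]          # rescale each output row
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused

# The fused layer matches linear-then-BN on any input.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 3)), rng.standard_normal(4)
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var = rng.standard_normal(4), rng.random(4) + 0.5
x = rng.standard_normal(3)
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fuse_linear_bn(W, b, gamma, beta, mean, var)
assert np.allclose(Wf @ x + bf, y_ref)
```

The same algebra does not apply to LN, whose statistics depend on each input at inference time, which is why the replacement enables the speedup.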
In this paper, we propose a novel two-stage uplink channel estimation strategy with reduced pilot overhead and error propagation for a reconfigurable intelligent surface (RIS)-aided multi-user (MU) millimeter wave (mmWave) system. Specifically, in Stage I, with a carefully designed RIS phase shift matrix and introduced matching matrices, all users jointly estimate the correlation factors between different paths of the common RIS-base station (BS) channel, which achieves a significant multi-user diversity gain. Then, the inherent scaling ambiguity and angle ambiguity of the mmWave cascaded channel are utilized to construct an ambiguous common RIS-BS channel composed of the estimated correlation factors. In Stage II, with the constructed ambiguous common RIS-BS channel, each user independently estimates its specific user-RIS channel using reduced pilots so as to obtain the entire cascaded channel. The theoretical number of pilots required by the proposed method is analyzed, and simulation results are presented to validate the effectiveness of the strategy.
In this paper, we adopt a three-stage uplink channel estimation protocol with reduced pilot overhead for a reconfigurable intelligent surface (RIS)-aided multi-user (MU) millimeter wave (mmWave) communication system, in which both the base station (BS) and the RIS are equipped with a uniform planar array (UPA). Specifically, in Stage I, the channel state information (CSI) of a typical user is estimated. To address the power leakage issue in the common angles-of-arrival (AoAs) estimation of this stage, we develop a low-complexity one-dimensional search method. In Stage II, a re-parameterized common BS-RIS channel is constructed with the estimated information from Stage I to estimate other users' CSI. In Stage III, only the rapidly varying channel gains need to be re-estimated. Furthermore, the proposed method can be extended to multi-antenna UPA-type users by decomposing the estimation of a multi-antenna channel with $J$ scatterers into estimating $J$ single-scatterer channels for a virtual single-antenna user. An orthogonal matching pursuit (OMP)-based method is proposed to estimate the angles-of-departure (AoDs) at the users. Simulation results demonstrate that the proposed algorithm achieves high channel estimation accuracy, approaching the genie-aided upper bound in the high SNR regime.
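For reference, the OMP primitive underlying the AoD estimation can be sketched generically (a textbook implementation on a random real-valued dictionary, not the paper's angle-grid dictionary): greedily pick the column most correlated with the residual, then re-fit all selected coefficients by least squares.

```python
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, sparsity: int) -> np.ndarray:
    """Orthogonal matching pursuit: recover a sparse x from y = A x by
    greedy column selection followed by least-squares re-fitting."""
    residual = y.astype(A.dtype).copy()
    support = []
    coef = np.zeros(0, dtype=A.dtype)
    for _ in range(sparsity):
        # Column of A most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        # Re-fit all selected coefficients jointly.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=A.dtype)
    x[support] = coef
    return x

# Recover a 2-sparse vector from 30 compressive measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 40)) / np.sqrt(30)
x_true = np.zeros(40)
x_true[[5, 17]] = [2.0, -1.5]
x_hat = omp(A, A @ x_true, sparsity=2)
```

In the AoD setting, the columns of the dictionary would be steering vectors on an angular grid, and the selected indices correspond to estimated departure angles.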