Abstract: Two-channel modulo analog-to-digital converters (ADCs) enable high-dynamic-range signal sensing at the Nyquist rate per channel, but existing designs quantise both channel outputs independently, incurring redundant bitrate costs. This paper proposes a bit-efficient quantisation scheme that exploits the integer-valued structure of inter-channel differences, transmitting one quantised channel output together with a compact difference index. We prove that this approach requires an overhead of only 1-2 bits per signal sample relative to conventional ADCs, despite operating with a much smaller per-channel dynamic range. Simulations confirm the theoretical error bounds and bitrate analysis, and hardware experiments demonstrate substantial bitrate savings over existing modulo sampling schemes while maintaining comparable reconstruction accuracy. These results highlight a practical path towards high-resolution, bandwidth-efficient modulo ADCs for bitrate-constrained systems.
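To make the encoding concrete, below is a minimal numerical sketch of the idea as we read it: both channels fold the same sample into a small range $[-\lambda, \lambda)$, one with a half-period offset, so the inter-channel difference deviates from the known offset only by an integer multiple of $2\lambda$ and can be transmitted as a compact index. The offset architecture, the names `lam`, `fold`, and `encode_sample`, and the specific index computation are illustrative assumptions, not the paper's design.

```python
import numpy as np

lam = 1.0  # per-channel modulo range: folded outputs lie in [-lam, lam)

def fold(x, offset=0.0):
    """Centered modulo: fold x + offset into [-lam, lam)."""
    return np.mod(x + offset + lam, 2.0 * lam) - lam

def encode_sample(x):
    y1 = fold(x)               # channel 1: folded sample
    y2 = fold(x, offset=lam)   # channel 2: folded with a half-period offset
    # Up to the known offset, y2 - y1 differs from -lam only by an integer
    # multiple of 2*lam, so a small integer index replaces a second
    # full-precision quantised stream (here the index needs a single bit).
    k = int(np.round((y2 - y1 + lam) / (2.0 * lam)))
    return y1, k               # transmit one quantised output + the index

# e.g. a sample far outside the per-channel range [-1, 1):
y1, k = encode_sample(7.3)     # the decoder rebuilds y2 as y1 - lam + 2*lam*k
```

In this toy architecture the index is a single bit, consistent with the 1-2 bit overhead the abstract claims; the paper's actual channel configuration and index alphabet may differ.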




Abstract: Conventional analog-to-digital converters (ADCs) clip when signals exceed their input range. Modulo (unlimited) sampling overcomes this limitation by folding the signal before digitization, but existing recovery methods are either computationally intensive or constrained by loose oversampling bounds that demand high sampling rates. In addition, none account for sampling jitter, which is unavoidable in practice. This paper revisits difference-based recovery and establishes new theoretical and practical guarantees. In the noiseless setting, we prove that arbitrarily high difference order reduces the sufficient oversampling factor from $2\pi e$ to $\pi$, substantially tightening classical bounds. For fixed order $N$, we derive a noise-aware sampling condition that guarantees stable recovery. For second-order difference-based recovery ($N=2$), we further extend the analysis to non-uniform sampling, proving robustness under bounded jitter. An FPGA-based hardware prototype demonstrates reliable reconstruction with amplitude expansion up to $\rho = 108$, confirming the feasibility of high-performance unlimited sensing with a simple and robust recovery pipeline.
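As a concrete illustration of the difference-based pipeline this abstract revisits, the following sketch performs $N$-th order difference recovery on a folded tone in the noiseless, uniform-sampling case: the $N$-th difference of the folded samples is re-folded to obtain the $N$-th difference of the true signal, the folding residual is recovered on the $2\lambda\mathbb{Z}$ grid, and $N$ anti-differences rebuild the signal. The threshold, test signal, and oversampling choices are ours for illustration; the paper's noise-aware and jitter-robust conditions are not reproduced here.

```python
import numpy as np

lam = 1.0                      # modulo threshold: folded samples in [-lam, lam)
rho = 8.0                      # amplitude expansion beyond the ADC range
fs, f, N = 2000.0, 25.0, 2     # sampling rate, tone frequency, difference order

t = np.arange(0.0, 0.2, 1.0 / fs)
x = rho * lam * np.sin(2.0 * np.pi * f * t)    # signal exceeding [-lam, lam)

def centered_mod(v):
    return np.mod(v + lam, 2.0 * lam) - lam

y = centered_mod(x)                            # folded (modulo) samples

# With enough oversampling, |Delta^N x| < lam, so re-folding the N-th
# difference of y recovers Delta^N x exactly:
d = np.diff(y, n=N)
dN_x = centered_mod(d)

# The folding residual r = x - y lives on the 2*lam integer grid; its N-th
# difference is dN_x - d (rounded to the grid to suppress float error):
dN_r = 2.0 * lam * np.round((dN_x - d) / (2.0 * lam))

# Anti-difference N times; the zero initial conditions assume the first N
# samples are unfolded (true here since x starts near zero):
r = dN_r
for _ in range(N):
    r = np.concatenate(([0.0], np.cumsum(r)))

x_hat = y + r
print(np.max(np.abs(x_hat - x)))               # ~machine precision
```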
Abstract: In this paper, a new speech feature fusion method is proposed for speaker recognition, based on a cross-gate parallel convolutional neural network (CG-PCNN). Mel filter bank features (MFBFs) of different frequency resolutions are extracted from each frame of a speaker's speech by several Mel filter banks with different numbers of triangular filters. Because these MFBFs have different frequency resolutions, they carry complementary information. The CG-PCNN extracts deep features from the MFBFs, applying a cross-gate mechanism to capture this complementary information and improve the performance of the speaker recognition system. The fusion feature is then obtained by concatenating the deep features for speaker recognition. Experimental results show that the proposed speech feature fusion method is effective, and the resulting speaker recognition system marginally outperforms existing state-of-the-art systems.
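To ground the architecture described above, here is a minimal PyTorch sketch of a two-branch cross-gate parallel CNN: each branch processes MFBFs of one frequency resolution, a cross gate scales each branch's feature maps by a sigmoid gate computed from the other branch, and the pooled deep features are concatenated into the fusion feature. All layer sizes, the gate form, the interpolation used to align the two mel resolutions, and names such as `CGPCNN` are our assumptions; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossGateBlock(nn.Module):
    """Each branch is modulated by a sigmoid gate computed from the other."""
    def __init__(self, ch):
        super().__init__()
        self.conv_a = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_b = nn.Conv2d(ch, ch, 3, padding=1)
        self.gate_a = nn.Conv2d(ch, ch, 1)  # gate for branch a, driven by b
        self.gate_b = nn.Conv2d(ch, ch, 1)  # gate for branch b, driven by a

    def forward(self, a, b):
        a = F.relu(self.conv_a(a))
        b = F.relu(self.conv_b(b))
        # Resize across branches since the two mel resolutions differ
        # (our choice for aligning the gates; the paper may do otherwise).
        b_for_a = F.interpolate(b, size=a.shape[2:])
        a_for_b = F.interpolate(a, size=b.shape[2:])
        # Cross gating: complementary cues flow between the branches.
        return (a * torch.sigmoid(self.gate_a(b_for_a)),
                b * torch.sigmoid(self.gate_b(a_for_b)))

class CGPCNN(nn.Module):
    def __init__(self, ch=32, embed_dim=256):
        super().__init__()
        self.stem_a = nn.Conv2d(1, ch, 3, padding=1)
        self.stem_b = nn.Conv2d(1, ch, 3, padding=1)
        self.cross = CrossGateBlock(ch)
        self.fc = nn.Linear(2 * ch, embed_dim)  # fused speaker embedding

    def forward(self, mfbf_a, mfbf_b):  # shapes (B, 1, mels, frames)
        a = F.relu(self.stem_a(mfbf_a))
        b = F.relu(self.stem_b(mfbf_b))
        a, b = self.cross(a, b)
        # Fusion feature: concatenate pooled deep features of both branches.
        fused = torch.cat([a.mean(dim=(2, 3)), b.mean(dim=(2, 3))], dim=1)
        return self.fc(fused)

# e.g. 40- and 80-filter MFBFs over 100 frames for a batch of 4 utterances:
emb = CGPCNN()(torch.randn(4, 1, 40, 100), torch.randn(4, 1, 80, 100))
```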