Shunqiao Sun
IHT-Inspired Neural Network for Single-Snapshot DOA Estimation with Sparse Linear Arrays

Sep 15, 2023
Yunqiao Hu, Shunqiao Sun

Single-snapshot direction-of-arrival (DOA) estimation using sparse linear arrays (SLAs) has gained significant attention in the field of automotive MIMO radars. This is due to the dynamic nature of automotive settings, where multiple snapshots are not accessible, and the importance of minimizing hardware costs. Low-rank Hankel matrix completion has been proposed to interpolate the missing elements in SLAs. However, matrix-completion solvers such as iterative hard thresholding (IHT) rely heavily on expert knowledge for hyperparameter tuning and lack task specificity. Moreover, IHT involves a truncated singular value decomposition (t-SVD), which incurs a high computational cost at each iteration. In this paper, we propose an IHT-inspired neural network for single-snapshot DOA estimation with SLAs, termed IHT-Net. We use a recurrent neural network structure to parameterize the IHT algorithm. Additionally, we integrate shallow-layer autoencoders to replace the t-SVD, reducing computational overhead while producing a novel optimizer through supervised learning. IHT-Net retains strong interpretability, as its layer operations align with the iterations of the IHT algorithm. The learned optimizer exhibits fast convergence and higher accuracy in full-array signal reconstruction followed by single-snapshot DOA estimation. Numerical results validate the effectiveness of the proposed method.
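The classical IHT baseline that IHT-Net unrolls alternates a t-SVD rank projection of a Hankel matrix with re-imposing the observed SLA measurements. A minimal NumPy sketch of that baseline is below; it is an illustration of generic IHT-style Hankel completion, not the proposed IHT-Net, and the array size, model order, and iteration count are assumptions for the example.

```python
import numpy as np

def hankel(x, L):
    """Build an L x (N - L + 1) Hankel matrix from the length-N vector x."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def iht_hankel_complete(y, mask, rank, n_iter=500):
    """IHT-style Hankel matrix completion for a single snapshot.

    y    : length-N array snapshot with zeros at the missing SLA sensors
    mask : boolean array, True where a sensor observation is available
    rank : assumed number of sources (model order)
    """
    N = len(y)
    L = N // 2 + 1
    x = y.astype(complex).copy()
    for _ in range(n_iter):
        # t-SVD step: project the Hankel matrix onto the rank-r set
        H = hankel(x, L)
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        Hr = (U[:, :rank] * s[:rank]) @ Vh[:rank]
        # Map back to a vector by averaging anti-diagonals (Hankel projection)
        x_new = np.zeros(N, dtype=complex)
        counts = np.zeros(N)
        for i in range(L):
            for j in range(H.shape[1]):
                x_new[i + j] += Hr[i, j]
                counts[i + j] += 1
        x = x_new / counts
        # Re-impose the observed SLA measurements
        x[mask] = y[mask]
    return x
```

The per-iteration t-SVD in the loop is exactly the step the paper replaces with shallow autoencoders to cut computational cost.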

* 5 pages, 5 figures 

Interpretable and Efficient Beamforming-Based Deep Learning for Single Snapshot DOA Estimation

Sep 14, 2023
Ruxin Zheng, Shunqiao Sun, Hongshan Liu, Honglei Chen, Jian Li

We introduce an interpretable deep learning approach for direction-of-arrival (DOA) estimation with a single snapshot. Classical subspace-based methods such as MUSIC and ESPRIT use spatial smoothing on uniform linear arrays for single-snapshot DOA estimation, but they suffer from reduced array aperture and are inapplicable to sparse arrays. Single-snapshot methods such as compressive sensing and the iterative adaptive approach (IAA) face high computational costs and slow convergence, hampering real-time use. Recent deep learning DOA methods offer promising accuracy and speed; however, their practical deployment is hindered by their black-box nature. To address this, we propose a deep-MPDR network that translates the minimum power distortionless response (MPDR)-type beamformer into a deep learning framework, enhancing generalization and efficiency. Comprehensive experiments on both simulated and real-world datasets demonstrate its superiority over conventional methods in inference time and accuracy. Moreover, it excels in efficiency, generalizability, and interpretability when compared with other deep learning DOA estimation networks.
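The MPDR beamformer that the network translates into deep learning can be illustrated in its classical single-snapshot form: with one snapshot the sample covariance is rank one, so a small diagonal load is needed to invert it. This is a minimal sketch of a generic MPDR (Capon-type) spectrum for a uniform linear array, not the paper's deep-MPDR network; the half-wavelength spacing and loading factor are assumptions.

```python
import numpy as np

def mpdr_spectrum(y, grid_deg, d=0.5, load=0.1):
    """Single-snapshot MPDR (Capon-type) spatial spectrum for a ULA.

    y        : length-N single-snapshot array observation
    grid_deg : candidate DOAs in degrees
    d        : element spacing in wavelengths
    load     : diagonal loading; keeps the rank-1 covariance invertible
    """
    N = len(y)
    R = np.outer(y, y.conj()) + load * np.eye(N)  # loaded sample covariance
    Rinv = np.linalg.inv(R)
    n = np.arange(N)
    spec = np.empty(len(grid_deg))
    for k, th in enumerate(grid_deg):
        # ULA steering vector for candidate angle th
        a = np.exp(1j * 2 * np.pi * d * n * np.sin(np.deg2rad(th)))
        spec[k] = 1.0 / np.real(a.conj() @ Rinv @ a)
    return spec
```

Peaks of the returned spectrum over the angle grid indicate the estimated DOAs.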

* 10 pages, 10 figures 

Widely Separated MIMO Radar Using Matrix Completion

Aug 29, 2023
Shunqiao Sun, Yunqiao Hu, Kumar Vijay Mishra, Athina P. Petropulu

We present a low-complexity widely separated multiple-input-multiple-output (WS-MIMO) radar that samples the signals at each of its multiple receivers at reduced rates. We process the low-rate samples of all transmit-receive chains at each receiver as data matrices. We demonstrate that each of these matrices is low rank as long as the target moves slowly within a coherent processing interval. We leverage matrix completion (MC) to recover the missing samples of each receiver signal matrix at the common fusion center. Subsequently, we estimate the targets' positions and Doppler velocities via the maximum likelihood method. Our MC-WS-MIMO approach recovers the missing samples, and thereafter the target parameters, at reduced rates without discretization. Our analysis using ambiguity functions shows that the antenna geometry affects the performance of MC-WS-MIMO. Numerical experiments demonstrate reasonably accurate target localization at an SNR of 20 dB and a sampling rate reduction to 20%.
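The matrix completion step, recovering the missing entries of a low-rank receiver data matrix from sub-sampled observations, can be sketched with a simple alternating-projection scheme: project onto rank-r matrices via truncated SVD, then re-impose the observed entries. This is a generic MC illustration under an assumed known rank, not the specific solver or parameters used in the paper.

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, n_iter=500):
    """Low-rank matrix completion by alternating projections.

    M_obs : matrix with valid values at observed entries (others ignored)
    mask  : boolean matrix, True where an entry was sampled
    rank  : assumed rank of the underlying data matrix
    """
    X = np.where(mask, M_obs, 0.0)  # initialize missing entries to zero
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vh[:rank]  # rank-r projection
        X[mask] = M_obs[mask]                     # data consistency
    return X
```

In the WS-MIMO setting the completed matrices would then feed the maximum likelihood estimation of target positions and Doppler velocities.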

* 13 pages, submitted to IEEE Transactions on Radar Systems 

Beyond Point Clouds: A Knowledge-Aided High Resolution Imaging Radar Deep Detector for Autonomous Driving

Nov 01, 2021
Ruxin Zheng, Shunqiao Sun, David Scharff, Teresa Wu

The potential of automotive radar for autonomous driving has not been fully exploited. We present a multiple-input multiple-output (MIMO) radar transmit and receive signal processing chain, a knowledge-aided approach that exploits radar domain knowledge and signal structure, to generate high-resolution radar range-azimuth spectra for object detection and classification using deep neural networks. To achieve waveform orthogonality among the large number of transmit antennas formed by cascading four automotive radar transceivers, we propose a staggered time-division multiplexing (TDM) scheme and a velocity unfolding algorithm based on both the Chinese remainder theorem and an overlapped array. Field experiments with multi-modal sensors were conducted at The University of Alabama. High-resolution radar spectra were obtained and labeled using camera and LiDAR recordings. Initial experiments show promising object detection performance using an image-oriented deep neural network, with an average precision of 96.1% at an intersection over union (IoU) threshold of 0.5 on 2,000 radar frames.
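The Chinese remainder theorem step behind velocity unfolding can be illustrated in isolation: staggered TDM yields Doppler measurements aliased by two different ambiguity numbers, and when those numbers are coprime the unambiguous Doppler bin is recovered from the two residues. Below is a generic integer CRT sketch, not the paper's algorithm; the helper name `crt_unfold` and the moduli are illustrative.

```python
def crt_unfold(r1, m1, r2, m2):
    """Recover x in [0, m1*m2) from its residues x % m1 == r1 and
    x % m2 == r2, for coprime moduli m1 and m2 (classic CRT)."""
    inv = pow(m1, -1, m2)  # modular inverse of m1 mod m2 (Python >= 3.8)
    return (r1 + m1 * (((r2 - r1) * inv) % m2)) % (m1 * m2)
```

For example, two staggered schedules with coprime ambiguity numbers m1 and m2 extend the unambiguous Doppler interval from m1 (or m2) bins to m1*m2 bins.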

* 6 pages, 9 figures 