Lei Cheng

Online Targetless Radar-Camera Extrinsic Calibration Based on the Common Features of Radar and Camera

Sep 02, 2023
Lei Cheng, Siyang Cao

Sensor fusion is essential for autonomous driving and autonomous robots, and radar-camera fusion systems have gained popularity due to the complementary sensing capabilities of the two sensors. However, accurate calibration between the sensors is crucial to ensure effective fusion and good overall system performance. Calibration involves intrinsic and extrinsic calibration, with the latter being particularly important for accurate sensor fusion. Unfortunately, many target-based calibration methods require complex operating procedures and well-designed experimental conditions, posing challenges for researchers attempting to reproduce the results. To address this issue, we introduce a novel approach that leverages deep learning to extract a common feature from raw radar data (i.e., Range-Doppler-Angle data) and camera images. Rather than representing this common feature explicitly, our method uses it implicitly to match identical objects across the two data sources, and these matches drive an online targetless calibration method between the radar and camera systems, through which the extrinsic transformation matrix is estimated. To enhance the accuracy and robustness of the calibration, we apply RANSAC and the Levenberg-Marquardt (LM) nonlinear optimization algorithm when deriving the matrix. Real-world experiments demonstrate the effectiveness and accuracy of the proposed method.
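A minimal sketch of the final estimation step described above: given matched radar-camera point pairs, the extrinsic rotation and translation are refined by minimizing reprojection error with Levenberg-Marquardt. The pinhole intrinsics, the 6-DoF parameterization, and the placeholder correspondences are assumptions for illustration, not the paper's implementation, and RANSAC outlier rejection is omitted.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, radar_pts, K):
    """Project 3D radar points into the image using extrinsics (rotvec, t)."""
    rotvec, t = params[:3], params[3:]
    cam_pts = Rotation.from_rotvec(rotvec).apply(radar_pts) + t  # radar -> camera frame
    uv = (K @ cam_pts.T).T                                       # pinhole projection
    return uv[:, :2] / uv[:, 2:3]

def reprojection_residual(params, radar_pts, pixels, K):
    return (project(params, radar_pts, K) - pixels).ravel()

# Hypothetical matched pairs produced by the learned common feature (placeholders).
radar_pts = np.random.uniform([-5, -2, 2], [5, 2, 30], size=(20, 3))  # radar-frame 3D points
pixels = np.random.rand(20, 2) * [1280, 720]                          # matched image pixels
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])           # assumed intrinsics

# Levenberg-Marquardt refinement of the 6-DoF extrinsics from an initial guess.
x0 = np.zeros(6)
result = least_squares(reprojection_residual, x0, args=(radar_pts, pixels, K), method="lm")
print("refined extrinsics (rotvec, t):", result.x)
```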

Striking The Right Balance: Three-Dimensional Ocean Sound Speed Field Reconstruction Using Tensor Neural Networks

Aug 09, 2023
Siyuan Li, Lei Cheng, Ting Zhang, Hangfang Zhao, Jianlong Li

Accurately reconstructing a three-dimensional ocean sound speed field (3D SSF) is essential for various ocean acoustic applications, but the sparsity and uncertainty of sound speed samples across a vast ocean region make it a challenging task. To tackle this challenge, a large body of reconstruction methods has been developed, including spline interpolation, matrix/tensor-based completion, and deep neural network-based reconstruction. However, a principled analysis of their effectiveness in 3D SSF reconstruction is still lacking. This paper performs a thorough analysis of the reconstruction error and highlights the need for a balanced representation model that integrates both expressiveness and conciseness. To meet this requirement, a 3D SSF-tailored tensor deep neural network is proposed, which utilizes tensor computations and deep neural network architectures to achieve accurate 3D SSF reconstruction. The proposed model not only includes the previous tensor-based SSF representation model as a special case, but also has a natural ability to reject noise. Numerical results using South China Sea 3D SSF data demonstrate that the proposed method outperforms state-of-the-art methods. The code is available at https://github.com/OceanSTARLab/Tensor-Neural-Network.
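For context on the tensor-based side of the comparison above, here is a minimal CP-style low-rank sketch that fits a rank-R factorization of a 3D field to sparse samples by gradient descent. It illustrates only the "concise representation" half of the expressiveness-conciseness trade-off; the grid shape, rank, optimizer, and synthetic data are assumptions, and this is not the paper's tensor neural network.

```python
import torch

# Sketch: fit a rank-R CP model T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
# to sparse sound speed samples. Sizes and settings are illustrative assumptions.
I, J, K, R = 30, 30, 20, 5                       # lat x lon x depth grid, CP rank
A = torch.randn(I, R, requires_grad=True)
B = torch.randn(J, R, requires_grad=True)
C = torch.randn(K, R, requires_grad=True)

# Hypothetical sparse observations: (index triple, value) pairs.
idx = torch.stack([torch.randint(0, I, (500,)),
                   torch.randint(0, J, (500,)),
                   torch.randint(0, K, (500,))], dim=1)
vals = torch.randn(500)

opt = torch.optim.Adam([A, B, C], lr=1e-2)
for step in range(1000):
    pred = (A[idx[:, 0]] * B[idx[:, 1]] * C[idx[:, 2]]).sum(dim=1)
    loss = torch.mean((pred - vals) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Full reconstructed field from the fitted factors.
field = torch.einsum("ir,jr,kr->ijk", A, B, C)
print(field.shape)
```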

3D Radar and Camera Co-Calibration: A Flexible and Accurate Method for Target-based Extrinsic Calibration

Jul 28, 2023
Lei Cheng, Arindam Sengupta, Siyang Cao

Advances in autonomous driving are inseparable from sensor fusion. Heterogeneous sensors are widely used because of their complementary properties, with radar and camera among the most commonly equipped sensors. Intrinsic and extrinsic calibration are essential steps in sensor fusion. Extrinsic calibration, which is independent of the sensors' own parameters and is performed after the sensors are installed, largely determines the accuracy of sensor fusion. Many target-based methods require cumbersome operating procedures and well-designed experimental conditions, making them difficult to reproduce. To this end, we propose a flexible, easy-to-reproduce and accurate method for extrinsic calibration of 3D radar and camera. The proposed method does not require a specially designed calibration environment. Instead, a single corner reflector (CR) is placed on the ground, radar and camera data are collected simultaneously and iteratively using the Robot Operating System (ROS), radar-camera point correspondences are obtained based on their timestamps, and these correspondences are then used as input to solve the perspective-n-point (PnP) problem, yielding the extrinsic calibration matrix. RANSAC is used for robustness and the Levenberg-Marquardt (LM) nonlinear optimization algorithm is used for accuracy. Multiple controlled-environment experiments as well as real-world experiments demonstrate the efficiency and accuracy (AED error of 15.31 pixels and Acc up to 89%) of the proposed method.
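A minimal sketch of the solving stage described above, assuming OpenCV: point correspondences go into solvePnPRansac for a robust initial estimate, and solvePnPRefineLM then polishes it with Levenberg-Marquardt. The intrinsics, ground-truth pose, and synthetic correspondences are placeholders standing in for the corner-reflector collections.

```python
import numpy as np
import cv2

K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], dtype=np.float64)  # assumed intrinsics
dist = np.zeros(5)                                     # assume negligible lens distortion

# Synthetic correspondences: radar-frame 3D points projected with a known pose plus pixel noise.
radar_pts = np.random.uniform([-5, -1, 4], [5, 1, 30], size=(40, 3))
rvec_gt = np.array([0.02, -0.01, 0.03])
tvec_gt = np.array([0.10, -0.20, 0.05])
img_pts, _ = cv2.projectPoints(radar_pts, rvec_gt, tvec_gt, K, dist)
img_pts = img_pts.reshape(-1, 2) + np.random.normal(0, 0.5, (40, 2))

# RANSAC-based PnP for a robust initial rotation/translation estimate.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(radar_pts, img_pts, K, dist,
                                             reprojectionError=8.0)

# Levenberg-Marquardt refinement on the inlier set for accuracy.
rvec, tvec = cv2.solvePnPRefineLM(radar_pts[inliers.ravel()], img_pts[inliers.ravel()],
                                  K, dist, rvec, tvec)

R, _ = cv2.Rodrigues(rvec)
extrinsic = np.hstack([R, tvec.reshape(3, 1)])         # 3x4 radar-to-camera matrix
print(extrinsic)
```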

Multipath Time-delay Estimation with Impulsive Noise via Bayesian Compressive Sensing

Jul 05, 2023
Xingyu Ji, Lei Cheng, Hangfang Zhao

Multipath time-delay estimation is commonly encountered in radar and sonar signal processing. In many real-life environments, impulsive noise is ubiquitous and significantly degrades estimation performance. Here, we propose a Bayesian approach that tailors Bayesian Compressive Sensing (BCS) to mitigate impulsive noise. In particular, a heavy-tailed Laplacian distribution is used as the statistical model for the impulsive noise, while a Laplacian prior is used for sparse multipath modeling. The resulting Bayesian learning problem involves both hyperparameter learning and parameter estimation, and is solved under the BCS inference framework. The performance of the proposed method is compared with benchmark methods, including compressive sensing (CS), BCS, and Laplacian-prior BCS (L-BCS). Simulation results show that the proposed method estimates the multipath parameters more accurately and achieves a lower root mean squared error (RMSE) under intensely impulsive noise.
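To make the sparse multipath model concrete, here is a small sketch of a delay dictionary of shifted pulse replicas and a MAP estimate under a Laplacian (sparsity-promoting) prior, which reduces to L1-regularized least squares (Lasso). The pulse shape, grid, and regularization weight are assumptions, and this stand-in uses a squared-error data fit; the paper's point is precisely that a heavy-tailed noise model handles the impulsive noise better than this.

```python
import numpy as np
from sklearn.linear_model import Lasso

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)                 # observation window
delays = np.arange(0, 0.1, 1 / fs)            # candidate time-delay grid

# Dictionary whose columns are the transmitted pulse delayed to each grid point.
D = np.stack([np.sinc(200 * (t - 0.01 - d)) for d in delays], axis=1)

# Synthetic two-path arrival plus heavy-tailed (Student-t) noise as an impulsive stand-in.
x_true = np.zeros(len(delays))
x_true[20], x_true[55] = 1.0, 0.6
y = D @ x_true + 0.05 * np.random.standard_t(df=2, size=len(t))

# MAP estimate with a Laplacian prior on the sparse path amplitudes == Lasso.
est = Lasso(alpha=1e-3, max_iter=10000).fit(D, y)
paths = np.nonzero(est.coef_ > 0.1)[0]
print("estimated path delays (s):", delays[paths])
```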

Overcoming Beam Squint in Dual-Wideband mmWave MIMO Channel Estimation: A Bayesian Multi-Band Sparsity Approach

Jun 19, 2023
Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu, H. Vincent Poor

The beam squint effect, which manifests as different steering matrices in different sub-bands, has been widely considered a challenge in millimeter wave (mmWave) multi-input multi-output (MIMO) channel estimation. Existing methods either require specific forms of the precoding/combining matrix, which restricts their general practicality, or simply ignore the beam squint effect by making use of only a single sub-band for channel estimation. Recognizing that the different steering matrices are coupled by the same set of unknown channel parameters, this paper proposes to exploit the common sparsity structure of the virtual channel model so that signals from different sub-bands can be jointly utilized to enhance channel estimation performance. A probabilistic model is built to induce the common sparsity in the spatial domain, and a first-order Taylor expansion is adopted to eliminate the grid mismatch in the dictionaries. To learn the model parameters, a variational expectation-maximization (EM) algorithm is derived, which automatically balances the likelihood function against the common sparsity prior and is applicable to arbitrary forms of precoding/combining matrices. Simulation results show the superior estimation accuracy of the proposed algorithm over existing methods under different noise powers and system configurations.
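A small numeric sketch of the beam squint effect referred to above: for a uniform linear array, the steering vector depends on the actual frequency of each sub-band rather than only the carrier, so the same physical angle yields different spatial responses across sub-bands. Array size, carrier, and bandwidth are assumed values chosen for illustration.

```python
import numpy as np

N = 64                                     # assumed ULA size
fc = 28e9                                  # assumed carrier frequency (Hz)
bw = 2e9                                   # assumed total bandwidth (Hz)
theta = np.deg2rad(30)                     # physical angle of arrival
d = 3e8 / fc / 2                           # half-wavelength spacing at the carrier

def steering(f, theta, N, d):
    """ULA steering vector evaluated at sub-band frequency f, not just the carrier."""
    n = np.arange(N)
    return np.exp(-1j * 2 * np.pi * f * d * n * np.sin(theta) / 3e8)

# Compare the lowest and highest sub-bands: the responses decorrelate across the band.
a_low = steering(fc - bw / 2, theta, N, d)
a_high = steering(fc + bw / 2, theta, N, d)
print("correlation between sub-band steering vectors:",
      abs(a_low.conj() @ a_high) / N)       # values below 1 indicate beam squint
```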

To Fold or Not to Fold: Graph Regularized Tensor Train for Visual Data Completion

Jun 19, 2023
Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu

Tensor train (TT) representation has achieved tremendous success in visual data completion tasks, especially when it is combined with tensor folding. However, folding an image or video tensor breaks the original data structure, leading to local information loss, as nearby pixels may be assigned to different dimensions and end up far away from each other. In this paper, to fully preserve the local information of the original visual data, we explore not folding the data tensor and instead adopt graph information to regularize local similarity between nearby entries. To overcome the high computational complexity introduced by the graph-based regularization in the TT completion problem, we propose to break the original problem into multiple sub-problems with respect to each TT core fiber, instead of each TT core as in traditional methods. Furthermore, to avoid heavy parameter tuning, a sparsity-promoting probabilistic model is built based on the generalized inverse Gaussian (GIG) prior, and an inference algorithm is derived under the mean-field approximation. Experiments on both synthetic data and real-world visual data show the superiority of the proposed methods.
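A minimal sketch of the graph regularization idea in the unfolded setting: build a chain-graph Laplacian over one image dimension and use the quadratic form tr(X^T L X) to penalize differences between neighboring entries. The graph construction, the weights, and how this penalty enters the TT completion objective are assumptions for illustration only.

```python
import numpy as np
import scipy.sparse as sp

def chain_laplacian(n):
    """Laplacian of a chain graph linking each index to its neighbor (nearby pixels)."""
    A = sp.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[1, -1])
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsr()

H, W = 64, 64
L_rows = chain_laplacian(H)

X = np.random.rand(H, W)                       # stand-in for one unfolding of the data tensor
smoothness = np.trace(X.T @ (L_rows @ X))      # sum of squared differences between row-neighbors
print("graph smoothness penalty:", smoothness)
```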

Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?

Apr 03, 2023
Wen Shen, Lei Cheng, Yuxiao Yang, Mingjie Li, Quanshi Zhang

In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts. Many recent studies have discovered that traditional DNNs usually encode sparse symbolic concepts. However, because an LLM has many more parameters than traditional DNNs, whether the LLM also encodes sparse symbolic concepts is still an open problem. Therefore, in this paper, we propose to disentangle the inference score of LLMs for dialogue tasks into a small number of symbolic concepts. We verify that these sparse concepts can accurately estimate the inference scores of the LLM on arbitrary masking states of the input sentence. We also evaluate the transferability of concepts encoded by an LLM and verify that symbolic concepts usually exhibit high transferability across similar input sentences. More crucially, these symbolic concepts can be used to explain the exact reasons responsible for the LLM's prediction errors.

GDOD: Effective Gradient Descent using Orthogonal Decomposition for Multi-Task Learning

Jan 31, 2023
Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo

Multi-task learning (MTL) aims to solve multiple related tasks simultaneously and has experienced rapid growth in recent years. However, MTL models often suffer from performance degeneration due to negative transfer when learning several tasks at once. Some related work attributes the problem to conflicting gradients, in which case useful gradient updates for all tasks must be selected carefully. To this end, we propose a novel optimization approach for MTL, named GDOD, which manipulates the gradient of each task using an orthogonal basis decomposed from the span of all task gradients. GDOD explicitly decomposes gradients into task-shared and task-conflict components and adopts a general update rule that avoids interference across all task gradients. This allows the update directions to be guided by the task-shared components. Moreover, we prove the convergence of GDOD theoretically under both convex and non-convex assumptions. Experimental results on several multi-task datasets not only demonstrate the significant improvement GDOD brings to existing MTL models but also show that our algorithm outperforms state-of-the-art optimization methods in terms of AUC and Logloss metrics.

* Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2022: 386-395  
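An illustrative sketch in the spirit of the GDOD description above, not its actual update rule: stack the task gradients, take an orthonormal basis of their span via QR, express each gradient in that basis, and keep only the directions along which no two tasks disagree in sign (a crude stand-in for "task-shared" versus "task-conflict" components). Every detail here is an assumption made for illustration.

```python
import numpy as np

def shared_direction_update(grads):
    """grads: (T, d) array of per-task gradients. Returns a combined update built only
    from basis directions where task gradients do not point in opposite ways.
    Illustrative simplification, not the GDOD rule from the paper."""
    G = np.asarray(grads)                      # (T, d)
    Q, _ = np.linalg.qr(G.T)                   # columns of Q: orthonormal basis of span{g_t}
    coeffs = G @ Q                             # (T, r): each task gradient in that basis
    conflict = (coeffs.max(axis=0) > 0) & (coeffs.min(axis=0) < 0)  # sign disagreement
    coeffs[:, conflict] = 0.0                  # drop "task-conflict" components
    return (coeffs @ Q.T).sum(axis=0)          # recombine and sum over tasks

grads = np.random.randn(3, 10)                 # 3 tasks, 10 parameters (toy example)
update = shared_direction_update(grads)
print(update.shape)                            # (10,)
```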
Output-Dependent Gaussian Process State-Space Model

Dec 15, 2022
Zhidi Lin, Lei Cheng, Feng Yin, Lexi Xu, Shuguang Cui

The Gaussian process state-space model (GPSSM) is a fully probabilistic state-space model that has attracted much attention over the past decade. However, the outputs of the transition function in existing GPSSMs are assumed to be independent, meaning that these GPSSMs cannot exploit the inductive biases between different outputs and lose some model capacity. To address this issue, this paper proposes an output-dependent and more realistic GPSSM by utilizing the well-known, simple yet practical linear model of coregionalization (LMC) framework to represent the output dependency. To jointly learn the output-dependent GPSSM and infer the latent states, we propose a variational sparse GP-based learning method that only mildly increases the computational complexity. Experiments on both synthetic and real datasets demonstrate the superiority of the output-dependent GPSSM in terms of learning and inference performance.

* 5 pages, 4 figures 
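A small sketch of the linear model of coregionalization (LMC) construction mentioned in the abstract above: independent latent GPs are mixed by a coregionalization matrix A, so the resulting outputs become correlated. The kernel, sizes, and A are toy assumptions, and none of the paper's state-space or variational machinery is shown.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.5):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 100)
K = rbf_kernel(x) + 1e-6 * np.eye(len(x))      # GP covariance with jitter

# Q independent latent GPs ...
Q, P = 2, 3                                    # latent functions, outputs (assumed sizes)
latent = rng.multivariate_normal(np.zeros(len(x)), K, size=Q)   # (Q, N)

# ... mixed by a coregionalization matrix A to give P correlated outputs:
# f_p(x) = sum_q A[p, q] * g_q(x).
A = rng.normal(size=(P, Q))
outputs = A @ latent                           # (P, N)
print(outputs.shape)
```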
ChordMixer: A Scalable Neural Attention Model for Sequences with Different Lengths

Jun 12, 2022
Ruslan Khalitov, Tong Yu, Lei Cheng, Zhirong Yang

Sequential data naturally have different lengths in many domains, with some sequences being very long. As an important modeling tool, neural attention should capture long-range interactions in such sequences. However, most existing neural attention models admit only short sequences, or they have to employ chunking or padding to enforce a constant input length. Here we propose a simple neural network building block called ChordMixer which can model attention for long sequences with variable lengths. Each ChordMixer block consists of a position-wise rotation layer without learnable parameters and an element-wise MLP layer. Repeatedly applying such blocks forms an effective network backbone that mixes the input signals towards the learning targets. We have tested ChordMixer on the synthetic adding problem, long document classification, and DNA sequence-based taxonomy classification. The experimental results show that our method substantially outperforms other neural attention models.
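A simplified PyTorch sketch of the block structure described above: a parameter-free layer that rolls groups of channels along the sequence by increasing offsets, followed by a per-position MLP. The grouping, the power-of-two offsets, and the MLP sizes are assumptions made for illustration; see the official ChordMixer code for the actual design.

```python
import torch
import torch.nn as nn

class RotateChannels(nn.Module):
    """Parameter-free mixing: split channels into groups and roll each group
    along the sequence dimension by an increasing offset (assumed powers of two)."""
    def forward(self, x):                       # x: (batch, length, channels)
        groups = torch.chunk(x, chunks=8, dim=-1)
        rolled = [torch.roll(g, shifts=2 ** i, dims=1) for i, g in enumerate(groups)]
        return torch.cat(rolled, dim=-1)

class MixerBlock(nn.Module):
    def __init__(self, channels, hidden=256):
        super().__init__()
        self.rotate = RotateChannels()
        self.mlp = nn.Sequential(nn.Linear(channels, hidden), nn.GELU(),
                                 nn.Linear(hidden, channels))
    def forward(self, x):
        return self.mlp(self.rotate(x))         # element-wise MLP after the rotation

x = torch.randn(4, 1024, 64)                    # works for any sequence length
block = MixerBlock(channels=64)
print(block(x).shape)                           # torch.Size([4, 1024, 64])
```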
