Tingfa Xu

Spectral-wise Implicit Neural Representation for Hyperspectral Image Reconstruction

Dec 02, 2023
Huan Chen, Wangcai Zhao, Tingfa Xu, Shiyun Zhou, Peifu Liu, Jianan Li

Coded Aperture Snapshot Spectral Imaging (CASSI) reconstruction aims to recover the 3D spatial-spectral signal from 2D measurement. Existing methods for reconstructing Hyperspectral Image (HSI) typically involve learning mappings from a 2D compressed image to a predetermined set of discrete spectral bands. However, this approach overlooks the inherent continuity of the spectral information. In this study, we propose an innovative method called Spectral-wise Implicit Neural Representation (SINR) as a pioneering step toward addressing this limitation. SINR introduces a continuous spectral amplification process for HSI reconstruction, enabling spectral super-resolution with customizable magnification factors. To achieve this, we leverage the concept of implicit neural representation. Specifically, our approach introduces a spectral-wise attention mechanism that treats individual channels as distinct tokens, thereby capturing global spectral dependencies. Additionally, our approach incorporates two components, namely a Fourier coordinate encoder and a spectral scale factor module. The Fourier coordinate encoder enhances the SINR's ability to emphasize high-frequency components, while the spectral scale factor module guides the SINR to adapt to the variable number of spectral channels. Notably, the SINR framework enhances the flexibility of CASSI reconstruction by accommodating an unlimited number of spectral bands in the desired output. Extensive experiments demonstrate that our SINR outperforms baseline methods. By enabling continuous reconstruction within the CASSI framework, we take the initial stride toward integrating implicit neural representation into the field.
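
To make the continuous-spectrum querying concrete, below is a minimal PyTorch sketch that combines a Fourier coordinate encoding and a spectral scale factor to predict intensity at arbitrary normalized band coordinates. It only illustrates the coordinate-query side of the idea; the module names, feature sizes, scale-factor form, and omission of the spectral-wise attention are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: query an implicit spectral representation at arbitrary bands.
import torch
import torch.nn as nn

class FourierSpectralEncoder(nn.Module):
    """Map a normalized spectral coordinate to Fourier features."""
    def __init__(self, num_freqs: int = 8):
        super().__init__()
        # Fixed log-spaced frequencies emphasize high-frequency content.
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs, dtype=torch.float32) * torch.pi)

    def forward(self, coord: torch.Tensor) -> torch.Tensor:
        # coord: (..., 1) in [0, 1]; returns (..., 2 * num_freqs)
        angles = coord * self.freqs
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class SpectralQueryMLP(nn.Module):
    """Predict a per-pixel intensity at a queried wavelength from a feature vector."""
    def __init__(self, feat_dim: int = 64, num_freqs: int = 8):
        super().__init__()
        self.encoder = FourierSpectralEncoder(num_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2 * num_freqs + 1, 128),  # +1 for the scale factor
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, feat, coord, scale):
        # feat: (N, feat_dim); coord, scale: (N, 1)
        x = torch.cat([feat, self.encoder(coord), scale], dim=-1)
        return self.mlp(x)

feats = torch.randn(4096, 64)                    # per-pixel features decoded from the 2D measurement (assumed)
coords = torch.rand(4096, 1)                     # arbitrary normalized band coordinates in [0, 1]
scale = torch.full((4096, 1), 1.0 / 62)          # scale factor for a 62-band query (assumed form)
pred = SpectralQueryMLP()(feats, coords, scale)  # (4096, 1) predicted intensities
```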

* Accepted by IEEE Transactions on Circuits and Systems for Video Technology, to be published 

Spectrum-driven Mixed-frequency Network for Hyperspectral Salient Object Detection

Dec 02, 2023
Peifu Liu, Tingfa Xu, Huan Chen, Shiyun Zhou, Haolin Qin, Jianan Li

Hyperspectral salient object detection (HSOD) aims to detect spectrally salient objects in hyperspectral images (HSIs). However, existing methods inadequately utilize spectral information by either converting HSIs into false-color images or combining neural networks with clustering. We propose a novel approach that fully leverages the spectral characteristics by extracting two distinct frequency components from the spectrum: low-frequency Spectral Saliency and high-frequency Spectral Edge. The Spectral Saliency approximates the region of salient objects, while the Spectral Edge captures their edge information. These two complementary components, crucial for HSOD, are derived from the inter-layer spectral angular distance of a Gaussian pyramid and from intra-neighborhood spectral angular gradients, respectively. To effectively utilize this dual-frequency information, we introduce a novel lightweight Spectrum-driven Mixed-frequency Network (SMN). SMN incorporates two parameter-free plug-and-play operators, namely the Spectral Saliency Generator and the Spectral Edge Operator, to independently extract the Spectral Saliency and Spectral Edge components from the input HSI. Subsequently, the Mixed-frequency Attention module, comprising two frequency-dependent heads, intelligently combines the embedded features of edge and saliency information, resulting in a mixed-frequency feature representation. Furthermore, a saliency-edge-aware decoder progressively scales up the mixed-frequency feature while preserving rich detail and saliency information for accurate salient object prediction. Extensive experiments conducted on the HS-SOD benchmark and our custom dataset HSOD-BIT demonstrate that our SMN outperforms state-of-the-art methods in HSOD performance. Code and dataset will be available at https://github.com/laprf/SMN.
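
As a rough illustration of the two parameter-free cues, here is a small PyTorch sketch of a per-pixel spectral angular distance and the saliency/edge components built on it. The pyramid handling, padding, and normalization are simplified assumptions, not the released SMN operators.

```python
# Hedged sketch of spectral-angle-based saliency and edge cues for an HSI cube.
import torch
import torch.nn.functional as F

def spectral_angle(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-pixel spectral angle between two tensors of shape (B, C, H, W)."""
    cos = (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + eps)
    return torch.acos(cos.clamp(-1.0, 1.0))  # (B, H, W), radians

def spectral_saliency(hsi: torch.Tensor) -> torch.Tensor:
    """Low-frequency cue: angle between the HSI and a blurred (pyramid-like) copy."""
    blurred = F.avg_pool2d(hsi, kernel_size=2)
    blurred = F.interpolate(blurred, size=hsi.shape[-2:], mode="bilinear",
                            align_corners=False)
    return spectral_angle(hsi, blurred)

def spectral_edge(hsi: torch.Tensor) -> torch.Tensor:
    """High-frequency cue: angular gradients w.r.t. horizontal/vertical neighbors."""
    dx = spectral_angle(hsi[..., :, 1:], hsi[..., :, :-1])
    dy = spectral_angle(hsi[..., 1:, :], hsi[..., :-1, :])
    dx = F.pad(dx, (0, 1))          # restore the width dimension
    dy = F.pad(dy, (0, 0, 0, 1))    # restore the height dimension
    return torch.sqrt(dx ** 2 + dy ** 2)

hsi = torch.rand(1, 81, 64, 64)     # toy hyperspectral cube
sal, edge = spectral_saliency(hsi), spectral_edge(hsi)
```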

* Accepted by IEEE Transactions on Multimedia, to be published 

Sample-adaptive Augmentation for Point Cloud Recognition Against Real-world Corruptions

Sep 19, 2023
Jie Wang, Lihe Ding, Tingfa Xu, Shaocong Dong, Xinli Xu, Long Bai, Jianan Li

Robust 3D perception under corruption has become an essential task in 3D vision. Current data augmentation techniques, however, usually perform random transformations on all point cloud objects in an offline way and ignore the structure of the samples, resulting in over- or under-enhancement. In this work, we propose an alternative that makes sample-adaptive transformations based on the structure of the sample to cope with potential corruption, via an auto-augmentation framework named AdaptPoint. Specifically, we leverage an imitator, consisting of a Deformation Controller and a Mask Controller, respectively in charge of predicting deformation parameters and producing a per-point mask based on the intrinsic structural information of the input point cloud, and then conduct corruption simulations on top. A discriminator is then utilized to prevent the generation of excessive corruption that deviates from the original data distribution. In addition, a perception-guidance feedback mechanism is incorporated to guide the generation of samples with an appropriate difficulty level. Furthermore, to address the paucity of real-world corrupted point clouds, we also introduce a new dataset, ScanObjectNN-C, which exhibits greater similarity to actual data in real-world environments, especially when contrasted with preceding CAD datasets. Experiments show that our method achieves state-of-the-art results on multiple corruption benchmarks, including ModelNet-C, our ScanObjectNN-C, and ShapeNet-C.
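
The sketch below illustrates the imitator idea: a small network predicts sample-specific deformation parameters and a per-point keep-mask from the input cloud, and the cloud is corrupted accordingly. The network layout, the anisotropic-scaling parameterization, and all names are illustrative assumptions; the discriminator and feedback loop are omitted.

```python
# Hedged sketch of sample-adaptive point cloud corruption via a learned imitator.
import torch
import torch.nn as nn

class Imitator(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.deform_head = nn.Linear(feat_dim, 3)   # per-axis anisotropic scaling (assumed)
        self.mask_head = nn.Linear(feat_dim, 1)     # per-point drop logits

    def forward(self, pts: torch.Tensor):
        # pts: (B, N, 3)
        feat = self.encoder(pts)                    # (B, N, F) per-point features
        global_feat = feat.max(dim=1).values        # (B, F) structure summary
        scale = 1.0 + 0.2 * torch.tanh(self.deform_head(global_feat))  # (B, 3)
        keep = torch.sigmoid(self.mask_head(feat)).squeeze(-1)         # (B, N)
        return scale, keep

def augment(pts: torch.Tensor, imitator: Imitator) -> torch.Tensor:
    scale, keep = imitator(pts)
    deformed = pts * scale.unsqueeze(1)             # sample-adaptive deformation
    mask = (keep > 0.5).float().unsqueeze(-1)       # simulate occlusion / drop-out
    return deformed * mask

cloud = torch.randn(2, 1024, 3)
augmented = augment(cloud, Imitator())
```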

* Accepted by ICCV2023; code: https://github.com/Roywangj/AdaptPoint 

RDFNet: Regional Dynamic FISTA-Net for Spectral Snapshot Compressive Imaging

Feb 06, 2023
Shiyun Zhou, Tingfa Xu, Shaocong Dong, Jianan Li

Deep convolutional neural networks have recently shown promising results in compressive spectral reconstruction. Previous methods, however, usually adopt a single mapping function for sparse representation. Considering that different regions have distinct characteristics, it is desirable to apply various mapping functions that dynamically adjust the transformations of different regions. With this in mind, we first introduce a regional dynamic way of using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to exploit regional characteristics and derive dynamic sparse representations. Then, we propose to unfold the process into a hierarchical dynamic deep network, dubbed RDFNet. The network comprises multiple regional dynamic blocks and corresponding pixel-wise adaptive soft-thresholding modules, respectively in charge of region-based dynamic mapping and pixel-wise soft-thresholding selection. The regional dynamic block guides the network to adjust the transformation domain for different regions. Equipped with the adaptive soft-thresholding, our regional dynamic architecture can also learn an appropriate shrinkage scale in a pixel-wise manner. Extensive experiments on both simulated and real data demonstrate that our method outperforms prior state-of-the-art methods.
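
The following is a minimal sketch of one unfolded FISTA iteration with a pixel-wise learned soft-threshold, the core of the adaptive shrinkage idea. The module layout, channel count, and the way the data-term gradient is supplied are assumptions for illustration, not the RDFNet architecture.

```python
# Hedged sketch: one unfolded FISTA step with a learned per-pixel threshold.
import torch
import torch.nn as nn

def soft_threshold(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Element-wise shrinkage with a (possibly per-pixel) threshold tau >= 0."""
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

class PixelwiseThreshold(nn.Module):
    """Predict a non-negative shrinkage scale for every pixel of the feature map."""
    def __init__(self, channels: int = 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.Softplus(),
        )

    def forward(self, x):
        return self.net(x)

class FISTAStep(nn.Module):
    """One unfolded iteration: gradient step on the data term, then shrinkage."""
    def __init__(self, channels: int = 28, step: float = 0.5):
        super().__init__()
        self.step = step
        self.threshold = PixelwiseThreshold(channels)

    def forward(self, z: torch.Tensor, grad_data: torch.Tensor) -> torch.Tensor:
        # grad_data: gradient of the measurement-consistency term at z,
        # supplied by the sensing model (omitted here for brevity).
        x = z - self.step * grad_data
        return soft_threshold(x, self.threshold(x))

z = torch.randn(1, 28, 64, 64)
x_next = FISTAStep()(z, torch.zeros_like(z))
```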

* IEEE Transactions on Computational Imaging 

Automated Optical Inspection of FAST's Reflector Surface using Drones and Computer Vision

Dec 18, 2022
Jianan Li, Shenwang Jiang, Liqiang Song, Peiran Peng, Feng Mu, Hui Li, Peng Jiang, Tingfa Xu

The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is the world's largest single-dish radio telescope. Its large reflecting surface achieves unprecedented sensitivity but is prone to damage, such as dents and holes, caused by naturally occurring falling objects. Hence, the timely and accurate detection of surface defects is crucial for FAST's stable operation. Conventional manual inspection involves human inspectors climbing up and examining the large surface visually, a time-consuming and potentially unreliable process. To accelerate the inspection process and increase its accuracy, this work takes the first step toward automating the inspection of FAST by integrating deep-learning techniques with drone technology. First, a drone flies over the surface along a predetermined route. Since surface defects vary significantly in scale and show high inter-class similarity, directly applying existing deep detectors to the drone imagery is highly prone to missing and misidentifying defects. As a remedy, we introduce cross-fusion, a dedicated plug-in operation for deep detectors that enables the adaptive fusion of multi-level features in a point-wise selective fashion, depending on local defect patterns. Consequently, strong semantics and fine-grained details are dynamically fused at different positions to support the accurate detection of defects of various scales and types. Our AI-powered drone-based automated inspection is time-efficient, reliable, and has good accessibility, which guarantees the long-term and stable operation of FAST.
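
As a hedged approximation of point-wise selective fusion of multi-level features, the sketch below gates two feature levels with per-position softmax weights. The abstract does not spell out the cross-fusion design, so the module name, gating scheme, and sizes here are assumptions, not the paper's operation.

```python
# Hedged sketch: per-position selective fusion of two detector feature levels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointwiseFusion(nn.Module):
    def __init__(self, channels: int, num_levels: int = 2):
        super().__init__()
        # Predict a per-position weight for every feature level.
        self.gate = nn.Conv2d(channels * num_levels, num_levels, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C, H, W) maps already resized to a common resolution.
        stacked = torch.stack(feats, dim=1)                  # (B, L, C, H, W)
        weights = self.gate(torch.cat(feats, dim=1))         # (B, L, H, W)
        weights = F.softmax(weights, dim=1).unsqueeze(2)     # (B, L, 1, H, W)
        return (stacked * weights).sum(dim=1)                # (B, C, H, W)

shallow = torch.randn(1, 256, 64, 64)   # fine-grained details
deep = F.interpolate(torch.randn(1, 256, 16, 16), size=(64, 64), mode="nearest")
fused = PointwiseFusion(256)([shallow, deep])
```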

Dynamic Loss For Robust Learning

Nov 22, 2022
Shenwang Jiang, Jianan Li, Jizhou Zhang, Ying Wang, Tingfa Xu

Label noise and class imbalance commonly coexist in real-world data. Previous works on robust learning, however, usually address only one of these data biases and underperform when facing both. To bridge this gap, this work presents a novel meta-learning based dynamic loss that automatically adjusts the objective function along with the training process to robustly learn a classifier from long-tailed noisy data. Concretely, our dynamic loss comprises a label corrector and a margin generator, which respectively correct noisy labels and generate additive per-class classification margins by perceiving the underlying data distribution as well as the learning state of the classifier. Equipped with a new hierarchical sampling strategy that enriches a small amount of unbiased metadata with diverse and hard samples, the two components of the dynamic loss are optimized jointly through meta-learning and cultivate the classifier to adapt well to clean and balanced test data. Extensive experiments show our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and WebVision. Code will soon be publicly available.
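
To show how corrected soft labels and per-class additive margins can enter a single objective, here is a minimal sketch of such a loss. The margin placement and the soft-target cross-entropy are assumptions for illustration; the meta-learning outer loop and hierarchical sampling are omitted.

```python
# Hedged sketch of a loss combining label correction and per-class margins.
import torch
import torch.nn.functional as F

def dynamic_loss(logits: torch.Tensor,
                 noisy_labels: torch.Tensor,
                 corrected_probs: torch.Tensor,
                 class_margins: torch.Tensor) -> torch.Tensor:
    """
    logits:          (B, K) classifier outputs
    noisy_labels:    (B,)   observed (possibly wrong) labels
    corrected_probs: (B, K) label-corrector output (soft targets)
    class_margins:   (K,)   additive margins from the margin generator
    """
    # Subtract the margin from the logit of the observed class so that tail or
    # noisy classes must be predicted with a larger gap to reduce the loss.
    adjusted = logits.clone()
    adjusted[torch.arange(logits.size(0)), noisy_labels] -= class_margins[noisy_labels]
    log_probs = F.log_softmax(adjusted, dim=-1)
    return -(corrected_probs * log_probs).sum(dim=-1).mean()

logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
soft = F.softmax(torch.randn(8, 10), dim=-1)     # stand-in corrector output
loss = dynamic_loss(logits, labels, soft, torch.zeros(10))
```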

FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection

Sep 22, 2022
Xinli Xu, Shaocong Dong, Lihe Ding, Jie Wang, Tingfa Xu, Jianan Li

3D object detection with multiple sensors is essential for an accurate and reliable perception system in autonomous driving and robotics. Existing 3D detectors significantly improve accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. Though impressive, the sparsity of point clouds, especially for faraway points, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects. To address this problem, we propose a novel multi-modality two-stage approach named FusionRCNN, which effectively and efficiently fuses point clouds and camera images within Regions of Interest (RoI). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, it first utilizes RoIPooling to obtain an image set of unified size and obtains the point set by sampling raw points within proposals in the RoI extraction step; it then leverages intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention to fuse the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches. Code will be released soon at https://github.com/xxlbigbrother/Fusion-RCNN.
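
The sketch below captures the intra-modality self-attention followed by cross-attention inside an RoI, using standard multi-head attention. The module name RoIFusion, token counts, and feature sizes are made-up assumptions rather than the released code.

```python
# Hedged sketch of self-attention per modality plus point-to-image cross-attention.
import torch
import torch.nn as nn

class RoIFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.point_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, point_tokens: torch.Tensor, image_tokens: torch.Tensor):
        # point_tokens: (B, Np, C) sampled LiDAR points inside the proposal
        # image_tokens: (B, Ni, C) RoI-pooled camera features
        p, _ = self.point_self_attn(point_tokens, point_tokens, point_tokens)
        i, _ = self.image_self_attn(image_tokens, image_tokens, image_tokens)
        # Points query the image tokens to absorb dense texture information.
        fused, _ = self.cross_attn(query=p, key=i, value=i)
        return fused

points = torch.randn(2, 128, 256)
pixels = torch.randn(2, 7 * 7, 256)
out = RoIFusion()(points, pixels)   # (2, 128, 256)
```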

* 7 pages, 3 figures 

RTNet: Relation Transformer Network for Diabetic Retinopathy Multi-lesion Segmentation

Jan 26, 2022
Shiqi Huang, Jianan Li, Yuze Xiao, Ning Shen, Tingfa Xu

Automatic segmentation of diabetic retinopathy (DR) lesions is of great value in assisting ophthalmologists with diagnosis. Although much research has been conducted on this task, most prior works pay too much attention to network design rather than the pathological associations among lesions. By investigating the pathogenic causes of DR lesions in advance, we found that certain lesions lie close to specific vessels and present patterns relative to each other. Motivated by this observation, we propose a relation transformer block (RTB) that incorporates attention mechanisms at two main levels: a self-attention transformer exploits global dependencies among lesion features, while a cross-attention transformer allows interactions between lesion and vessel features, integrating valuable vascular information to alleviate the ambiguity in lesion detection caused by complex fundus structures. In addition, to better capture small lesion patterns, we propose a global transformer block (GTB) that preserves detailed information in the deep network. By integrating the above blocks in a dual-branch design, our network segments the four kinds of lesions simultaneously. Comprehensive experiments on the IDRiD and DDR datasets demonstrate the superiority of our approach, which achieves competitive performance compared to state-of-the-art methods.
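
A compact sketch of the relation transformer block idea follows: lesion tokens first attend to themselves, then query vessel tokens to absorb vascular context. The dimensions, residual/normalization layout, and token construction are assumptions for illustration, not the RTNet definition.

```python
# Hedged sketch of lesion self-attention followed by lesion-vessel cross-attention.
import torch
import torch.nn as nn

class RelationTransformerBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, lesion_tokens: torch.Tensor, vessel_tokens: torch.Tensor):
        # lesion_tokens: (B, Nl, C) flattened lesion-branch features
        # vessel_tokens: (B, Nv, C) flattened vessel-branch features
        x, _ = self.self_attn(lesion_tokens, lesion_tokens, lesion_tokens)
        x = self.norm1(lesion_tokens + x)               # global lesion dependencies
        y, _ = self.cross_attn(query=x, key=vessel_tokens, value=vessel_tokens)
        return self.norm2(x + y)                        # vessel-aware lesion features

lesions = torch.randn(1, 32 * 32, 256)
vessels = torch.randn(1, 32 * 32, 256)
out = RelationTransformerBlock()(lesions, vessels)      # (1, 1024, 256)
```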

* IEEE Transactions on Medical Imaging 

Delving into Sample Loss Curve to Embrace Noisy and Imbalanced Data

Dec 30, 2021
Shenwang Jiang, Jianan Li, Ying Wang, Bo Huang, Zhang Zhang, Tingfa Xu

Corrupted labels and class imbalance are commonly encountered in practically collected training data and easily lead to over-fitting of deep neural networks (DNNs). Existing approaches alleviate these issues by adopting a sample re-weighting strategy, which re-weights samples through a designed weighting function. However, such strategies are only applicable to training data containing a single type of bias. In practice, biased samples with corrupted labels and samples from tailed classes commonly co-exist in training data, and how to handle them simultaneously is a key but under-explored problem. In this paper, we find that these two types of biased samples, though they have similar transient losses, exhibit distinguishable trends and characteristics in their loss curves, which can provide valuable priors for sample weight assignment. Motivated by this, we delve into the loss curves and propose a novel probe-and-allocate training strategy: in the probing stage, we train the network on the whole biased training data without intervention and record the loss curve of each sample as an additional attribute; in the allocating stage, we feed the resulting attribute to a newly designed curve-perception network, named CurveNet, which learns to identify the bias type of each sample and adaptively assign proper weights through meta-learning. The slow training speed of meta-learning also hinders its application; to address this, we propose skip-layer meta optimization (SLMO), which accelerates training by skipping the bottom layers. Extensive synthetic and real experiments validate the proposed method, which achieves state-of-the-art performance on multiple challenging benchmarks.
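
A toy sketch of the probe-and-allocate idea is given below: record each sample's loss trajectory during probing, then map it to a sample weight with a small 1D network. The real CurveNet and its meta-learning update are more involved; the architecture and names here are assumptions.

```python
# Hedged sketch: map per-sample loss curves to sample weights.
import torch
import torch.nn as nn

class CurvePerception(nn.Module):
    """Map a per-sample loss curve to a sample weight in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, curves: torch.Tensor) -> torch.Tensor:
        # curves: (N, T) loss value of each sample at every probing epoch
        return self.net(curves.unsqueeze(1)).squeeze(-1)   # (N,) weights

# Probing stage (pseudo-values): losses recorded per sample over 50 epochs.
loss_curves = torch.rand(1024, 50)
weights = CurvePerception()(loss_curves)
# Allocating stage: weight each sample's loss (meta-learning update omitted).
weighted_loss = (weights.detach() * torch.rand(1024)).mean()
```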

* Accepted by AAAI-2022 

PAPooling: Graph-based Position Adaptive Aggregation of Local Geometry in Point Clouds

Nov 28, 2021
Jie Wang, Jianan Li, Lihe Ding, Ying Wang, Tingfa Xu

Fine-grained geometry, captured by aggregating point features in local regions, is crucial for object recognition and scene understanding in point clouds. Nevertheless, existing prominent point cloud backbones usually incorporate max/average pooling for local feature aggregation, which largely ignores the positional distribution of points and leads to inadequate assembly of fine-grained structures. To mitigate this bottleneck, we present an efficient alternative to max pooling, Position Adaptive Pooling (PAPooling), that explicitly models spatial relations among local points using a novel graph representation and aggregates features in a position-adaptive manner, enabling position-sensitive representations of aggregated features. Specifically, PAPooling consists of two key steps, Graph Construction and Feature Aggregation, respectively in charge of constructing a graph whose edges link the center point with every neighboring point in a local region to map their relative positional information to channel-wise attentive weights, and adaptively aggregating local point features based on the generated weights through a Graph Convolution Network (GCN). PAPooling is simple yet effective, and flexible enough to be readily used as a plug-and-play operator with popular backbones such as PointNet++ and DGCNN. Extensive experiments on various tasks ranging from 3D shape classification and part segmentation to scene segmentation demonstrate that PAPooling can significantly improve predictive accuracy with only minimal extra computational overhead. Code will be released.
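
The sketch below illustrates position-adaptive aggregation: relative neighbor positions are mapped to channel-wise weights that modulate neighbor features before summation. It is a simplification of the graph/GCN formulation with assumed names and sizes, intended only to show the contrast with plain max pooling.

```python
# Hedged sketch: position-adaptive aggregation as a drop-in pooling replacement.
import torch
import torch.nn as nn

class PAPool(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.pos_mlp = nn.Sequential(
            nn.Linear(3, channels), nn.ReLU(inplace=True), nn.Linear(channels, channels)
        )

    def forward(self, neigh_feats: torch.Tensor, rel_pos: torch.Tensor) -> torch.Tensor:
        # neigh_feats: (B, N, K, C) features of K neighbors around each of N centers
        # rel_pos:     (B, N, K, 3) neighbor coordinates relative to the center
        weights = torch.softmax(self.pos_mlp(rel_pos), dim=2)   # (B, N, K, C)
        return (weights * neigh_feats).sum(dim=2)               # (B, N, C)

feats = torch.randn(2, 512, 16, 64)   # e.g. grouped PointNet++-style features
rel = torch.randn(2, 512, 16, 3)
pooled = PAPool(64)(feats, rel)       # replaces per-group max pooling
```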

* 9 pages, 6 figures 