Xinpeng Li

Real3D-AD: A Dataset of Point Cloud Anomaly Detection

Sep 26, 2023
Jiaqi Liu, Guoyang Xie, Ruitao Chen, Xinpeng Li, Jinbao Wang, Yong Liu, Chengjie Wang, Feng Zheng


High-precision point cloud anomaly detection is the gold standard for identifying defects in advanced machining and precision manufacturing. Despite some methodological advances in this area, the scarcity of datasets and the lack of a systematic benchmark hinder its development. We introduce Real3D-AD, a challenging high-precision point cloud anomaly detection dataset that addresses these limitations. With 1,254 high-resolution 3D items, each containing from forty thousand to millions of points, Real3D-AD is the largest dataset for high-precision 3D industrial anomaly detection to date. Real3D-AD surpasses existing 3D anomaly detection datasets in point cloud resolution (0.0010 mm-0.0015 mm), 360-degree coverage, and the availability of perfect (anomaly-free) prototypes. Additionally, we present a comprehensive benchmark for Real3D-AD, revealing the absence of baseline methods for high-precision point cloud anomaly detection. To address this, we propose Reg3D-AD, a registration-based 3D anomaly detection method incorporating a novel feature memory bank that preserves local and global representations. Extensive experiments on the Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility and accessibility, we provide the Real3D-AD dataset, benchmark source code, and Reg3D-AD on our website: https://github.com/M-3LAB/Real3D-AD.
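The abstract does not spell out how the feature memory bank is queried at test time; the following is a rough, hypothetical sketch of the general memory-bank idea (not the released Reg3D-AD code), where anomaly scores are nearest-neighbor distances between features of a registered test item and features memorized from anomaly-free prototypes. Function names here are placeholders.

```python
import numpy as np

def build_memory_bank(train_feature_sets):
    """Stack patch/point features from anomaly-free training items into one bank (M, D)."""
    return np.concatenate(train_feature_sets, axis=0)

def anomaly_scores(test_features, memory_bank):
    """Score each test feature by its distance to the nearest memorized feature."""
    # (N, 1, D) - (1, M, D) -> pairwise L2 distances of shape (N, M)
    d = np.linalg.norm(test_features[:, None, :] - memory_bank[None, :, :], axis=-1)
    return d.min(axis=1)  # per-point anomaly score
```

An object-level score would then typically be taken as the maximum of the per-point scores.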


MEFLUT: Unsupervised 1D Lookup Tables for Multi-exposure Image Fusion

Sep 21, 2023
Ting Jiang, Chuan Wang, Xinpeng Li, Ru Li, Haoqiang Fan, Shuaicheng Liu

In this paper, we introduce a new approach for high-quality multi-exposure image fusion (MEF). We show that the fusion weights of an exposure can be encoded into a 1D lookup table (LUT), which takes a pixel intensity value as input and produces a fusion weight as output. We learn one 1D LUT for each exposure; all the pixels from different exposures can then query the 1D LUT of their exposure independently for high-quality and efficient fusion. Specifically, to learn these 1D LUTs, we incorporate attention mechanisms along several dimensions, including frame, channel, and spatial, into the MEF task, which brings significant quality improvements over the state-of-the-art (SOTA). In addition, we collect a new MEF dataset consisting of 960 samples, 155 of which are manually tuned by professionals as ground truth for evaluation. Our network is trained on this dataset in an unsupervised manner. Extensive experiments demonstrate the effectiveness of all the newly proposed components, and the results show that our approach outperforms the SOTA both qualitatively and quantitatively on our dataset and on another representative dataset, SICE. Moreover, our 1D LUT approach takes less than 4 ms to process a 4K image on a PC GPU. Given its quality, efficiency, and robustness, our method has been shipped in millions of Android phones across multiple brands worldwide. Code is available at: https://github.com/Hedlen/MEFLUT.
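As a rough illustration of the fusion step described above (not the released implementation), each learned 1D LUT can be indexed by pixel intensity to obtain per-pixel fusion weights, which are then normalized across exposures and used to blend the frames. The `luts` argument is a hypothetical list of learned 256-entry tables.

```python
import numpy as np

def fuse_exposures(images, luts, eps=1e-6):
    """images: list of HxWx3 uint8 frames; luts: list of 256-entry weight tables."""
    weights = []
    for img, lut in zip(images, luts):
        gray = img.mean(axis=-1).astype(np.uint8)   # intensity used to index the LUT
        weights.append(lut[gray])                   # per-pixel fusion weight, shape (H, W)
    w = np.stack(weights, axis=0)
    w = w / (w.sum(axis=0, keepdims=True) + eps)    # normalize weights across exposures
    fused = sum(wi[..., None] * img.astype(np.float32)
                for wi, img in zip(w, images))
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Because each pixel only performs a table lookup and a weighted sum, this style of fusion is what makes sub-4 ms 4K inference plausible.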


SAM-IQA: Can Segment Anything Boost Image Quality Assessment?

Jul 10, 2023
Xinpeng Li, Ting Jiang, Haoqiang Fan, Shuaicheng Liu


Image Quality Assessment (IQA) is a challenging task that requires training on massive datasets to achieve accurate predictions. However, due to the lack of IQA data, deep learning-based IQA methods typically rely on networks pre-trained on large datasets, such as ResNet trained on ImageNet, as feature extractors to enhance their generalization ability. In this paper, we use the encoder of Segment Anything, a recently proposed segmentation model trained on a massive dataset, for high-level semantic feature extraction. Most IQA methods are limited to extracting spatial-domain features, while frequency-domain features have been shown to better represent noise and blur. We therefore leverage both spatial-domain and frequency-domain features by applying Fourier and standard convolutions, respectively, on the extracted features. Extensive experiments demonstrate the effectiveness of all the proposed components, and the results show that our approach outperforms the state-of-the-art (SOTA) on four representative datasets, both qualitatively and quantitatively. Our experiments confirm the powerful feature extraction capabilities of Segment Anything and highlight the value of combining spatial-domain and frequency-domain features in IQA tasks. Code: https://github.com/Hedlen/SAM-IQA
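A hedged sketch of the general spatial-plus-frequency idea, under the assumption that the frequency branch convolves the real/imaginary spectrum with a 1x1 convolution before transforming back. This illustrates the technique, not the authors' exact architecture; the class name `SpatialFreqBlock` is made up for this example.

```python
import torch
import torch.nn as nn

class SpatialFreqBlock(nn.Module):
    """Fuse a spatial conv branch with a frequency-domain (FFT) conv branch."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # 1x1 conv over concatenated real/imaginary parts of the spectrum
        self.freq = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):                                # x: (B, C, H, W) encoder features
        s = self.spatial(x)                              # spatial-domain branch
        spec = torch.fft.rfft2(x, norm="ortho")          # complex spectrum (B, C, H, W//2+1)
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = self.freq(z)                                 # convolution in the frequency domain
        re, im = torch.chunk(z, 2, dim=1)
        f = torch.fft.irfft2(torch.complex(re, im), s=x.shape[-2:], norm="ortho")
        return s + f                                     # fused spatial + frequency features
```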


DIPNet: Efficiency Distillation and Iterative Pruning for Image Super-Resolution

Apr 14, 2023
Lei Yu, Xinpeng Li, Youwei Li, Ting Jiang, Qi Wu, Haoqiang Fan, Shuaicheng Liu


Efficient deep learning-based approaches have achieved remarkable performance in single image super-resolution. However, recent studies on efficient super-resolution have mainly focused on reducing the number of parameters and floating-point operations through various network designs. Although these methods can decrease parameter counts and floating-point operations, they do not necessarily reduce actual running time. To address this issue, we propose a novel multi-stage lightweight network boosting method that enables lightweight networks to achieve outstanding performance. Specifically, we leverage enhanced high-resolution output as additional supervision to improve the learning ability of the lightweight student network. Once the student network converges, we further simplify its structure to an even more lightweight level using reparameterization techniques and iterative network pruning. Meanwhile, we adopt an effective lightweight network training strategy that combines multi-anchor distillation and progressive learning. Ultimately, our proposed method achieves the fastest inference time among all participants in the NTIRE 2023 efficient super-resolution challenge while maintaining competitive super-resolution performance. Additionally, extensive experiments demonstrate the effectiveness of the proposed components. The results show that our approach achieves comparable performance on the representative DIV2K dataset, both qualitatively and quantitatively, with faster inference and fewer network parameters.
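The abstract does not give the exact training objective; as a minimal, hypothetical sketch of the enhanced-output supervision and multi-anchor distillation idea, the student could be trained against the ground-truth high-resolution image plus the enhanced outputs of one or more teacher "anchors". The weighting `alpha` and the function name are assumptions for illustration.

```python
import torch.nn.functional as F

def distillation_step(student, lr, hr, teacher_outputs, alpha=0.5):
    """One hypothetical training step combining ground-truth supervision with
    multi-anchor distillation targets produced by stronger teacher networks."""
    sr = student(lr)
    loss = F.l1_loss(sr, hr)                        # fidelity to the ground-truth HR image
    for t_sr in teacher_outputs:                    # enhanced HR outputs as extra supervision
        loss = loss + alpha * F.l1_loss(sr, t_sr.detach())
    return loss
```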


Rail Detection: An Efficient Row-based Network and A New Benchmark

Apr 12, 2023
Xinpeng Li, Xiaojiang Peng


Rail detection, essential for railroad anomaly detection, aims to identify the railroad region in video frames. Although various studies on rail detection exist, neither an open benchmark nor a high-speed network is available to the community, making algorithm comparison and development difficult. Inspired by the growth of lane detection, we propose a rail database and a row-based rail detection method. In detail, we make the following contributions: (i) We present a real-world railway dataset, Rail-DB, with 7,432 pairs of images and annotations. The images are collected under varied lighting conditions, road structures, and views. The rails are labeled with polylines, and the images are categorized into nine scenes. Rail-DB is expected to facilitate the improvement of rail detection algorithms. (ii) We present an efficient row-based rail detection method, Rail-Net, containing a lightweight convolutional backbone and an anchor classifier. Specifically, we formulate rail detection as a row-based selection problem. This strategy reduces the computational cost compared to alternative segmentation methods. (iii) We evaluate Rail-Net on Rail-DB with extensive experiments, including cross-scene settings and network backbones ranging from ResNet to Vision Transformers. Our method achieves promising performance in both speed and accuracy. Notably, a lightweight version achieves 92.77% accuracy at 312 frames per second. Rail-Net outperforms the traditional method by 50.65% and the segmentation-based one by 5.86%. The database and code are available at: https://github.com/Sampson-Lee/Rail-Detection.
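To make the row-based formulation concrete, here is a hedged sketch of what a row-anchor classification head could look like: for every image row and every rail, the network predicts which of a fixed set of horizontal cells the rail passes through (plus an "absent" class), instead of labeling every pixel. The class and argument names are illustrative, not the released Rail-Net code.

```python
import torch.nn as nn

class RowAnchorHead(nn.Module):
    """Hypothetical row-based head: per row and per rail, classify the column cell."""
    def __init__(self, feat_dim, num_rows, num_cols, num_rails):
        super().__init__()
        self.num_rows, self.num_cols, self.num_rails = num_rows, num_cols, num_rails
        self.fc = nn.Linear(feat_dim, num_rows * (num_cols + 1) * num_rails)

    def forward(self, feat):                 # feat: (B, feat_dim) pooled backbone features
        logits = self.fc(feat)
        # cross-entropy is applied over the (num_cols + 1) dimension for each row/rail
        return logits.view(-1, self.num_cols + 1, self.num_rows, self.num_rails)
```

Classifying one cell per row is far cheaper than dense segmentation, which is where the speed advantage of the row-based formulation comes from.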

* Accepted by ACMMM 2022 

AU-Aware Vision Transformers for Biased Facial Expression Recognition

Nov 12, 2022
Shuyi Mao, Xinpeng Li, Qingyang Wu, Xiaojiang Peng


Studies have proven that domain bias and label bias exist across different Facial Expression Recognition (FER) datasets, making it hard to improve the performance on a specific dataset by adding other datasets. For the FER bias issue, recent research mainly focuses on the cross-domain setting with advanced domain adaptation algorithms. This paper addresses another problem: how to boost FER performance by leveraging cross-domain datasets. Unlike coarse and biased expression labels, facial Action Units (AUs) are fine-grained and objective, as suggested by psychological studies. Motivated by this, we resort to the AU information of different FER datasets for performance boosting and make the following contributions. First, we show experimentally that naive joint training on multiple FER datasets harms the FER performance of individual datasets. We further introduce expression-specific mean images and AU cosine distances to measure FER dataset bias. This novel measurement is consistent with the experimentally observed degradation under joint training. Second, we propose a simple yet conceptually new framework, AU-aware Vision Transformer (AU-ViT), which improves the performance on individual datasets by jointly training with auxiliary datasets carrying AU or pseudo-AU labels. We also find that AU-ViT is robust to real-world occlusions. Moreover, for the first time, we show that a carefully initialized ViT achieves performance comparable to advanced deep convolutional networks. Our AU-ViT achieves state-of-the-art performance on three popular datasets, namely 91.10% on RAF-DB, 65.59% on AffectNet, and 90.15% on FERPlus. The code and models will be released soon.
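A minimal sketch of the joint-training objective implied by the abstract, assuming the primary dataset contributes an expression classification loss and the auxiliary dataset contributes a multi-label loss on its (pseudo-)AU annotations; the weighting `lam` is an assumed hyperparameter, not a value from the paper.

```python
import torch.nn.functional as F

def au_vit_loss(expr_logits, expr_labels, au_logits, au_labels, lam=1.0):
    """Hypothetical joint objective: expression CE + auxiliary AU BCE."""
    loss_expr = F.cross_entropy(expr_logits, expr_labels)                       # primary FER data
    loss_au = F.binary_cross_entropy_with_logits(au_logits, au_labels.float())  # auxiliary AU / pseudo-AU data
    return loss_expr + lam * loss_au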


AU-Supervised Convolutional Vision Transformers for Synthetic Facial Expression Recognition

Jul 22, 2022
Shuyi Mao, Xinpeng Li, Junyao Chen, Xiaojiang Peng


This paper describes our proposed methodology for the six basic expression classification track of the Affective Behavior Analysis in-the-wild (ABAW) Competition 2022. In the Learning from Synthetic Data (LSD) task, facial expression recognition (FER) methods aim to learn expression representations from artificially generated data and generalize to real data. Because of the ambiguity of synthetic data and the objectivity of facial Action Units (AUs), we resort to AU information for performance boosting and make the following contributions. First, to adapt the model to synthetic scenarios, we use knowledge from pre-trained large-scale face recognition data. Second, we propose a conceptually new framework, termed AU-Supervised Convolutional Vision Transformers (AU-CVT), which clearly improves FER performance by jointly training auxiliary datasets with AU or pseudo-AU labels. Our AU-CVT achieves an F1 score of 0.6863 and an accuracy of 0.7433 on the validation set. The source code of our work is publicly available online: https://github.com/msy1412/ABAW4
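For reference, metrics of this form can be computed from predicted and ground-truth expression labels as sketched below, assuming the F1 score is macro-averaged over the six expression classes (the exact averaging used in the challenge is not stated in this abstract).

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    """Macro F1 over the expression classes plus overall accuracy."""
    return {
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "accuracy": accuracy_score(y_true, y_pred),
    }
```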


NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results

May 25, 2022
Eduardo Pérez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Aleš Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marín-Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan, Ming Sun, Xing Wen, Junlin Li, Jinjing Li, Chenghua Li, Ruipeng Gang, Fangya Li, Chenming Liu, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai, Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Fangya Li, Ruipeng Gang, Chenghua Li, Jinjing Li, Sai Ma, Chenming Liu, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai, Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer, Chan Y. Park


This paper reviews the challenge on constrained high dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2022. This manuscript focuses on the competition set-up, datasets, the proposed methods, and their results. The challenge aims at estimating an HDR image from multiple low dynamic range (LDR) observations, which may suffer from under- or over-exposed regions and different sources of noise. The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores while respecting a low-complexity constraint (i.e., solutions cannot exceed a given number of operations); in Track 2, participants are asked to minimize the complexity of their solutions while meeting a constraint on fidelity scores (i.e., solutions are required to obtain a higher fidelity score than the prescribed baseline). Both tracks use the same data and metrics: fidelity is measured by PSNR with respect to a ground-truth HDR image (computed both directly and after a canonical tonemapping operation), while complexity metrics include the number of multiply-accumulate (MAC) operations and runtime (in seconds).
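As a sketch of the fidelity metrics described above, the tonemapped variant (often reported as PSNR-mu) evaluates PSNR after applying a mu-law tonemap to both the estimate and the ground truth; mu = 5000 is a common choice in multi-frame HDR benchmarks and is assumed here rather than taken from this abstract.

```python
import numpy as np

MU = 5000.0  # assumed mu-law parameter, common in HDR evaluation

def mu_tonemap(x, mu=MU):
    """Canonical mu-law tonemapping of linear HDR values in [0, 1]."""
    return np.log1p(mu * x) / np.log1p(mu)

def psnr(pred, target, peak=1.0):
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_mu(pred, target):
    """PSNR computed after tonemapping both the estimate and the ground truth."""
    return psnr(mu_tonemap(pred), mu_tonemap(target))
```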

* Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022  
* CVPR Workshops 2022. 15 pages, 21 figures, 2 tables 

ADNet: Attention-guided Deformable Convolutional Network for High Dynamic Range Imaging

May 22, 2021
Zhen Liu, Wenjie Lin, Xinpeng Li, Qing Rao, Ting Jiang, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu


In this paper, we present ADNet, an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging. This problem involves two intractable challenges: how to handle saturation and noise properly, and how to tackle misalignments caused by object motion or camera jitter. To address the former, we adopt a spatial attention module that adaptively selects the most appropriate regions of the differently exposed low dynamic range (LDR) images for fusion. For the latter, we propose to align the gamma-corrected images at the feature level with a Pyramid, Cascading and Deformable (PCD) alignment module. The proposed ADNet achieves state-of-the-art performance compared with previous methods, reaching a PSNR-$l$ of 39.4471 and a PSNR-$\mu$ of 37.6359 in the NTIRE 2021 Multi-Frame HDR Challenge.
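A hedged sketch of the spatial attention idea for multi-exposure fusion: attention maps are predicted from a non-reference frame's features concatenated with the reference frame's features, then used to re-weight the non-reference features before fusion. This is an illustration of the general mechanism, not the exact ADNet module.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Hypothetical spatial attention for selecting well-exposed, well-aligned regions."""
    def __init__(self, channels):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_nonref, feat_ref):
        # attention in (0, 1), conditioned on both the non-reference and reference features
        a = self.att(torch.cat([feat_nonref, feat_ref], dim=1))
        return feat_nonref * a   # suppress saturated or misaligned regions before fusion
```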

* Accepted by CVPRW 2021 

Improving Generalized Zero-Shot Learning by Semantic Discriminator

Jun 11, 2020
Xinpeng Li


It is a recognized fact that the classification accuracy of unseen classes in the Generalized Zero-Shot Learning (GZSL) setting is much lower than that of traditional Zero-Shot Learning (ZSL). One of the reasons is that instances are frequently misclassified into the wrong domain, where we refer to the seen and unseen classes as two domains. We propose a new approach to distinguish whether an instance comes from the seen or the unseen classes. First, the visual feature of the instance is projected into the semantic space. Then, the absolute norm difference between the projected semantic vector and the class semantic embedding vector, together with the minimum distance between the projected semantic vector and the semantic embedding vectors of the seen classes, is used as the discrimination basis. This approach is termed SD (Semantic Discriminator) because the domain judgment of an instance is performed in the semantic space. Our approach can be combined with any existing ZSL method and a fully supervised classification model to form a new GZSL method. Furthermore, our approach is very simple and does not require any fixed parameters.
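A minimal sketch of the discrimination rule described above, assuming a linear visual-to-semantic projection `W` and illustrative thresholds on the two quantities (the abstract does not specify how the decision is parameterized, so the thresholding here is an assumption).

```python
import numpy as np

def semantic_discriminator(visual_feat, W, seen_class_embeds, norm_thresh, dist_thresh):
    """Hypothetical SD rule: project a visual feature into the semantic space and
    flag the instance as 'unseen' when it sits far from every seen-class embedding."""
    s = W @ visual_feat                                     # projected semantic vector
    dists = np.linalg.norm(seen_class_embeds - s, axis=1)   # distances to seen-class embeddings
    nearest = seen_class_embeds[np.argmin(dists)]
    norm_diff = abs(np.linalg.norm(s) - np.linalg.norm(nearest))
    return (norm_diff > norm_thresh) or (dists.min() > dist_thresh)
```

Instances flagged as unseen would then be handed to a ZSL classifier, and the rest to a fully supervised classifier, forming the combined GZSL pipeline described above.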
