
Zhenming Peng

VEDA: Uneven light image enhancement via a vision-based exploratory data analysis model

May 25, 2023
Tian Pu, Shuhang Wang, Zhenming Peng, Qingsong Zhu

Uneven light image enhancement is a highly demanded task in many industrial image processing applications. Many existing enhancement methods based on physical lighting models or deep-learning techniques often produce unnatural results. This is mainly because: 1) the assumptions and priors made by physical lighting model (PLM) based approaches are violated in most natural scenes, and 2) the training datasets or loss functions used by deep-learning based methods cannot handle the varied lighting scenarios of the real world well. In this paper, we propose a novel vision-based exploratory data analysis model (VEDA) for uneven light image enhancement. Our method is conceptually simple yet effective. A given image is first decomposed into a contrast image that preserves most of the perceptually important scene details, and a residual image that preserves the lighting variations. After achieving this decomposition at multiple scales using a retinal model that simulates the neuronal response to light, the enhanced result at each scale is obtained by manipulating the two images and recombining them. A weighted averaging strategy based on the residual image then combines the enhanced results across scales into the output image. A similar weighting strategy can also be leveraged to reconcile noise suppression and detail preservation. Extensive experiments on different image datasets demonstrate that the proposed method achieves results competitive with state-of-the-art methods while remaining simple and effective. It requires no explicit assumptions or priors about the scene imaging process, no iterative solving of optimization problems, and no learning procedures.
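
To make the decomposition-and-reweighting pipeline concrete, here is a minimal sketch of an enhancement loop in this spirit. It is not the authors' VEDA implementation: the Gaussian surround used as the lighting estimate, the Naka-Rushton-style response, the scale values, and the darkness-based weights are all illustrative assumptions.

```python
# Minimal sketch of a VEDA-style multiscale enhancement loop.
# NOT the authors' implementation; the Gaussian surround, the
# Naka-Rushton response, and the weighting are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_uneven_light(img, scales=(15, 80, 250), eps=1e-6):
    """img: float grayscale array in [0, 1]."""
    results, weights = [], []
    for sigma in scales:
        # Residual image: smooth estimate of the lighting variation.
        residual = gaussian_filter(img, sigma)
        # Contrast image: perceptually important scene details.
        contrast = img - residual
        # Retinal-style (Naka-Rushton form) response compresses the
        # lighting component toward mid-tones.
        half_sat = residual.mean()
        boosted = residual / (residual + half_sat + eps)
        # Recombine the manipulated lighting with the preserved details.
        results.append(np.clip(boosted + contrast, 0.0, 1.0))
        # Darker residual regions get larger weights so shadows are lifted.
        weights.append(1.0 - residual)
    results, weights = np.stack(results), np.stack(weights)
    # Residual-based weighted average across scales gives the output.
    return (results * weights).sum(0) / (weights.sum(0) + eps)
```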

LR-CSNet: Low-Rank Deep Unfolding Network for Image Compressive Sensing

Dec 18, 2022
Tianfang Zhang, Lei Li, Christian Igel, Stefan Oehmcke, Fabian Gieseke, Zhenming Peng

Deep unfolding networks (DUNs) have proven to be a viable approach to compressive sensing (CS). In this work, we propose a DUN called low-rank CS network (LR-CSNet) for natural image CS. Real-world image patches are often well-represented by low-rank approximations. LR-CSNet exploits this property by adding a low-rank prior to the CS optimization task. We derive a corresponding iterative optimization procedure using variable splitting, which is then translated to a new DUN architecture. The architecture uses low-rank generation modules (LRGMs), which learn low-rank matrix factorizations, as well as gradient descent and proximal mappings (GDPMs), which are proposed to extract high-frequency features to refine image details. In addition, the deep features generated at each reconstruction stage in the DUN are transferred between stages to boost the performance. Our extensive experiments on three widely considered datasets demonstrate the promising performance of LR-CSNet compared to state-of-the-art methods in natural image CS.
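
As a concrete illustration of the low-rank prior, the following is a hypothetical sketch of a low-rank generation module that predicts a rank-constrained matrix from deep features. The layer shapes, pooling choices, and parameter names are assumptions for exposition, not the published LR-CSNet architecture.

```python
# Hypothetical sketch of an LRGM-style module: two heads predict the
# factors U and V, so the product U @ V has rank at most `rank`.
# Shapes and pooling are illustrative assumptions, not LR-CSNet's code.
import torch
import torch.nn as nn

class LRGM(nn.Module):
    def __init__(self, channels=32, rank=4, size=33):
        super().__init__()
        self.u_head = nn.Sequential(
            nn.Conv2d(channels, rank, 3, padding=1),
            nn.AdaptiveAvgPool2d((size, 1)))   # -> B x rank x size x 1
        self.v_head = nn.Sequential(
            nn.Conv2d(channels, rank, 3, padding=1),
            nn.AdaptiveAvgPool2d((1, size)))   # -> B x rank x 1 x size

    def forward(self, feat):
        u = self.u_head(feat).squeeze(-1).transpose(1, 2)  # B x size x rank
        v = self.v_head(feat).squeeze(-2)                  # B x rank x size
        return torch.bmm(u, v)  # B x size x size, rank <= `rank`
```

The product of a size-by-rank and a rank-by-size factor is low-rank by construction, which is how a module of this kind can inject the low-rank prior into the unfolded iterations.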

AGPCNet: Attention-Guided Pyramid Context Networks for Infrared Small Target Detection

Nov 05, 2021
Tianfang Zhang, Siying Cao, Tian Pu, Zhenming Peng

Infrared small target detection is an important problem in fields such as earth observation, military reconnaissance, and disaster relief, and it has received widespread attention recently. This paper presents the Attention-Guided Pyramid Context Network (AGPCNet) algorithm. Its main components are an Attention-Guided Context Block (AGCB), a Context Pyramid Module (CPM), and an Asymmetric Fusion Module (AFM). AGCB divides the feature map into patches to compute local associations and uses Global Context Attention (GCA) to compute global associations between semantics; CPM integrates features from multi-scale AGCBs; and AFM integrates low-level and deep-level semantics from a feature-fusion perspective to enhance the utilization of features. Experimental results demonstrate that AGPCNet achieves new state-of-the-art performance on two available infrared small target datasets. The source code is available at https://github.com/Tianfang-Zhang/AGPCNet.
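
The patch-wise "local association" step inside the AGCB can be sketched roughly as follows. The patch handling and attention form are simplified guesses for illustration; the repository linked above contains the authoritative implementation.

```python
# Rough sketch of patch-wise non-local association: self-attention is
# computed independently inside each patch of the feature map.
# This is a simplification, not the released AGPCNet code.
import torch

def local_association(feat: torch.Tensor, patch: int = 8) -> torch.Tensor:
    """feat: B x C x H x W, with H and W divisible by `patch`."""
    b, c, h, w = feat.shape
    # Split the map into non-overlapping patches, folded into the batch.
    x = feat.unfold(2, patch, patch).unfold(3, patch, patch)       # B,C,H/p,W/p,p,p
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch * patch)  # (B*n) x C x p^2
    # Pixel-to-pixel associations within each patch.
    attn = torch.softmax(torch.bmm(x.transpose(1, 2), x), dim=-1)  # (B*n) x p^2 x p^2
    out = torch.bmm(x, attn)
    # Restore the original spatial layout.
    out = out.reshape(b, h // patch, w // patch, c, patch, patch)
    return out.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
```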

* 12 pages, 13 figures, 8 tables 

Total Variation with Overlapping Group Sparsity and Lp Quasinorm for Infrared Image Deblurring under Salt-and-Pepper Noise

Jan 01, 2019
Xingguo Liu, Yinping Chen, Zhenming Peng, Juan Wu

Because of the limitations of the infrared imaging principle and the properties of infrared imaging systems, infrared images have several drawbacks, including a lack of detail, indistinct edges, and a large amount of salt-and-pepper noise. To improve the sparse characteristics of the image while preserving edges and weakening staircase artifacts, this paper proposes an infrared image deblurring method based on overlapping group sparse total variation that uses the Lp quasinorm instead of the L1 norm. The Lp quasinorm introduces an additional degree of freedom, better describes the sparsity characteristics of images, and improves image restoration. Furthermore, we adopt the accelerated alternating direction method of multipliers (ADMM) and the fast Fourier transform to improve the efficiency and robustness of the algorithm. Experiments show that, under different blur and salt-and-pepper noise conditions, the proposed method delivers excellent performance in terms of both objective evaluation and subjective visual results.
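
Within such an ADMM scheme, the Lp quasinorm replaces the ordinary soft-thresholding step with a generalized shrinkage. Below is a minimal sketch of one way to compute it elementwise via a fixed-point iteration; the update form and iteration count are illustrative assumptions, not the paper's exact solver.

```python
# Sketch of an Lp-quasinorm shrinkage: approximately solves, per element,
#     min_x 0.5 * (x - v)^2 + lam * |x|^p,   0 < p < 1,
# via a generalized soft-thresholding fixed point. Illustrative only.
import numpy as np

def lp_shrink(v, lam, p=0.5, iters=10):
    mag = np.abs(v)
    x = mag.copy()
    for _ in range(iters):
        # Once an element hits zero, the large x**(p-1) term keeps it at
        # zero, mimicking the thresholding region of the Lp proximal map.
        x = np.maximum(mag - lam * p * np.power(x + 1e-12, p - 1.0), 0.0)
    return np.sign(v) * x
```

With p = 1 the update reduces to ordinary soft-thresholding, max(|v| - lam, 0), which is exactly the L1 behavior the Lp quasinorm is meant to improve upon.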
