
Jinxiu Liang


NeuralMPS: Non-Lambertian Multispectral Photometric Stereo via Spectral Reflectance Decomposition

Nov 28, 2022
Jipeng Lv, Heng Guo, Guanying Chen, Jinxiu Liang, Boxin Shi


Multispectral photometric stereo (MPS) aims at recovering the surface normal of a scene from a single-shot multispectral image captured under multispectral illuminations. Existing MPS methods adopt the Lambertian reflectance model to make the problem tractable, but this greatly limits their application to real-world surfaces. In this paper, we propose a deep neural network named NeuralMPS to solve the MPS problem under general non-Lambertian spectral reflectances. Specifically, we present a spectral reflectance decomposition (SRD) model to disentangle the spectral reflectance into geometric components and spectral components. With this decomposition, we show that the MPS problem for surfaces with a uniform material is equivalent to conventional photometric stereo (CPS) with unknown light intensities. In this way, NeuralMPS reduces the difficulty of the non-Lambertian MPS problem by leveraging well-studied non-Lambertian CPS methods. Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method.
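To make the decomposition idea concrete, here is a minimal sketch of the reasoning in the abstract; the notation is chosen here for illustration and is not necessarily the paper's. Per-channel image formation and the spectral reflectance decomposition are assumed to take the form

\[
  I_c \;=\; \rho_c(\mathbf{n}, \mathbf{l}_c)\,\max(\mathbf{n}^{\top}\mathbf{l}_c,\, 0),
  \qquad
  \rho_c(\mathbf{n}, \mathbf{l}_c) \;=\; s_c \, g(\mathbf{n}, \mathbf{l}_c),
\]

where $I_c$ is the observation in channel $c$ under light direction $\mathbf{l}_c$, $\mathbf{n}$ is the surface normal, $s_c$ is the channel-dependent spectral component, and $g$ is a channel-shared geometric component. Substituting the decomposition into the image formation model gives $I_c = s_c\, g(\mathbf{n}, \mathbf{l}_c)\,\max(\mathbf{n}^{\top}\mathbf{l}_c, 0)$, which has the same form as conventional photometric stereo with an unknown per-light intensity $s_c$; this is the sense in which uniform-material MPS reduces to CPS with unknown light intensities, so existing non-Lambertian CPS solvers can be reused.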


Recurrent Exposure Generation for Low-Light Face Detection

Jul 21, 2020
Jinxiu Liang, Jingwen Wang, Yuhui Quan, Tianyi Chen, Jiaying Liu, Haibin Ling, Yong Xu


Face detection from low-light images is challenging due to limited photons and inevitable noise, which, to make the task even harder, are often spatially unevenly distributed. A natural solution is to borrow the idea of multi-exposure imaging, which captures multiple shots to obtain well-exposed images under challenging conditions. High-quality implementation or approximation of multi-exposure from a single image is, however, nontrivial. Fortunately, as shown in this paper, such high quality is not necessary, since our task is face detection rather than image enhancement. Specifically, we propose a novel Recurrent Exposure Generation (REG) module and couple it seamlessly with a Multi-Exposure Detection (MED) module, thereby significantly improving face detection performance by effectively suppressing non-uniform illumination and noise. REG progressively and efficiently produces intermediate images corresponding to various exposure settings, and these pseudo-exposures are then fused by MED to detect faces across different lighting conditions. The proposed method, named REGDet, is the first "detection-with-enhancement" framework for low-light face detection. It not only encourages rich interaction and feature fusion across different illumination levels, but also enables effective end-to-end learning of the REG component so that it is better tailored for face detection. Moreover, as clearly shown in our experiments, REG can be flexibly coupled with different face detectors without requiring extra low/normal-light image pairs for training. We tested REGDet on the DARK FACE low-light face benchmark with a thorough ablation study, where REGDet outperforms previous state-of-the-art methods by a significant margin with only negligible extra parameters.

* 11 pages 
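As a rough illustration of the coupling described in the abstract (not the authors' implementation; the module names, layer sizes, and the max-fusion rule below are all assumptions), a recurrent exposure generator can feed a shared detection backbone so that the whole pipeline trains end-to-end:

import torch
import torch.nn as nn


class RecurrentExposureGen(nn.Module):
    """Recurrently produces K pseudo-exposure images from one low-light input."""

    def __init__(self, steps: int = 4, hidden: int = 16):
        super().__init__()
        self.steps = steps
        self.hidden = hidden
        # Tiny conv-recurrent update: [current image, state] -> new state.
        self.cell = nn.Sequential(
            nn.Conv2d(3 + hidden, hidden, 3, padding=1), nn.ReLU(inplace=True))
        self.to_image = nn.Conv2d(hidden, 3, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        state = x.new_zeros(b, self.hidden, h, w)
        cur, exposures = x, []
        for _ in range(self.steps):
            state = self.cell(torch.cat([cur, state], dim=1))
            cur = torch.sigmoid(self.to_image(state))  # next pseudo-exposure
            exposures.append(cur)
        return exposures  # list of K pseudo-exposure images


class MultiExposureDetector(nn.Module):
    """Shared backbone per pseudo-exposure; fused features feed a detection head."""

    def __init__(self, num_anchors: int = 1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(32, num_anchors * 5, 1)  # score + 4 box offsets

    def forward(self, exposures):
        feats = torch.stack([self.backbone(e) for e in exposures], dim=0)
        fused = feats.max(dim=0).values  # simple max-fusion across exposures
        return self.head(fused)


if __name__ == "__main__":
    x = torch.rand(2, 3, 128, 128)  # a batch of dark images
    reg, med = RecurrentExposureGen(), MultiExposureDetector()
    out = med(reg(x))               # detection loss would backpropagate into REG
    print(out.shape)                # torch.Size([2, 5, 32, 32])

The real REG/MED design, fusion strategy, and detector are more elaborate; the point of the sketch is only that the exposure generator's outputs feed the detector directly, so gradients from the detection loss shape the pseudo-exposures.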

Deep Bilateral Retinex for Low-Light Image Enhancement

Jul 04, 2020
Jinxiu Liang, Yong Xu, Yuhui Quan, Jingwen Wang, Haibin Ling, Hui Ji


Low-light images, i.e., images captured in low-light conditions, suffer from very poor visibility caused by low contrast, color distortion, and significant measurement noise. Low-light image enhancement aims to improve the visibility of such images. As the measurement noise in low-light images is usually significant yet complex, with spatially varying characteristics, handling it effectively is an important yet challenging problem in low-light image enhancement. Based on the Retinex decomposition of natural images, this paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise. The basic idea is to train a neural network to generate a set of pixel-wise operators for simultaneously predicting the noise and the illumination layer, where the operators are defined in the bilateral space. Such an integrated approach allows us to obtain an accurate prediction of the reflectance layer in the presence of significant spatially varying measurement noise. Extensive experiments on several benchmark datasets show that the proposed method is highly competitive with state-of-the-art methods and has a significant advantage over them when processing images captured under extremely low lighting conditions.

* 15 pages 
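For intuition, a common way to write the noise-aware Retinex model that the abstract alludes to is sketched below; the exact formulation used in the paper may differ, and the symbols here are chosen for illustration.

\[
  I \;=\; R \odot L + N,
  \qquad
  \hat{R} \;=\; (I - \hat{N}) \oslash \hat{L},
\]

where $I$ is the observed low-light image, $R$ the reflectance layer, $L$ the illumination layer, $N$ the measurement noise, and $\odot$, $\oslash$ denote element-wise multiplication and division. In this reading, the network-generated pixel-wise operators, defined in bilateral space, produce the estimates $\hat{N}$ and $\hat{L}$, from which the enhanced reflectance $\hat{R}$ follows.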