Huifang Li

A physics-constrained machine learning method for mapping gapless land surface temperature

Jul 03, 2023
Jun Ma, Huanfeng Shen, Menghui Jiang, Liupeng Lin, Chunlei Meng, Chao Zeng, Huifang Li, Penghai Wu

Accurate, spatio-temporally complete, and physically consistent land surface temperature (LST) estimation has long been a central interest in Earth system research. Physics-driven mechanism models and data-driven machine learning (ML) models are the two major paradigms for gapless LST estimation, each with its own advantages and disadvantages. In this paper, a physics-constrained ML model, which combines the strengths of the mechanism model and the ML model, is proposed to generate gapless LST with physical meaning and high accuracy. The hybrid model employs ML as the primary architecture, into which physical constraints on the input variables are incorporated to enhance the interpretability and extrapolation ability of the model. Specifically, a light gradient-boosting machine (LGBM) model that uses only remote sensing data as input serves as the pure ML model. Physical constraints (PCs) are coupled by further incorporating key Community Land Model (CLM) forcing data (cause) and CLM simulation data (effect) as inputs into the LGBM model. This integration forms the PC-LGBM model, which embeds the surface energy balance (SEB) constraints underlying the data in CLM-LST modeling within a biophysical framework. Compared with a pure physical method and pure ML methods, the PC-LGBM model improves both the prediction accuracy and the physical interpretability of LST. It also demonstrates good extrapolation ability in its responses to extreme weather cases, suggesting that the PC-LGBM model is not only empirically learned from data but also rationally derived from theory. The proposed method represents an innovative way to map accurate and physically interpretable gapless LST, and could provide insights to accelerate knowledge discovery in land surface processes and data mining in geographical parameter estimation.
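The cause-and-effect coupling described above can be sketched in a few lines: train one boosted-tree model on remote sensing features alone, and a second on the same features augmented with CLM forcing (cause) and CLM-simulated LST (effect). This is a minimal illustration with synthetic data, using scikit-learn's GradientBoostingRegressor as a stand-in for LightGBM; all variable names are hypothetical, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical remote sensing predictors (e.g. NDVI, albedo, elevation, solar zenith).
remote_sensing = rng.normal(size=(n, 4))

# Physical-constraint inputs: CLM forcing data (cause) and CLM-simulated LST (effect).
clm_forcing = rng.normal(size=(n, 2))
clm_lst = 2.0 * remote_sensing[:, 0] + clm_forcing[:, 0] + rng.normal(scale=0.1, size=n)

# Synthetic target LST, driven by both the simulation and the observations.
y = clm_lst + 0.5 * remote_sensing[:, 1] + rng.normal(scale=0.1, size=n)

X_ml = remote_sensing                                           # pure ML inputs
X_pc = np.column_stack([remote_sensing, clm_forcing, clm_lst])  # PC-augmented inputs

train, holdout = slice(0, 400), slice(400, n)
pure_ml = GradientBoostingRegressor(random_state=0).fit(X_ml[train], y[train])
pc_model = GradientBoostingRegressor(random_state=0).fit(X_pc[train], y[train])

err_ml = np.mean((pure_ml.predict(X_ml[holdout]) - y[holdout]) ** 2)
err_pc = np.mean((pc_model.predict(X_pc[holdout]) - y[holdout]) ** 2)
```

On this synthetic target, the PC-augmented model can explain variance (the forcing term) that the pure ML inputs never see, which is the mechanism behind the accuracy gain the abstract attributes to physical coupling.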

Collaborative Perception in Autonomous Driving: Methods, Datasets and Challenges

Jan 16, 2023
Yushan Han, Hui Zhang, Huifang Li, Yi Jin, Congyan Lang, Yidong Li

Collaborative perception is essential for addressing occlusion and sensor failure in autonomous driving. In recent years, deep learning for collaborative perception has been thriving, and numerous methods have been proposed. Although some works have reviewed and analyzed the basic architectures and key components in this field, there is still a lack of systematic reviews of the collaboration modules in perception networks and of large-scale collaborative perception datasets. The primary goal of this work is to address these gaps and provide a comprehensive review of recent achievements in the field. First, we introduce the fundamental technologies and collaboration schemes. We then give an overview of practical collaborative perception methods and systematically summarize the collaboration modules in networks that improve collaboration efficiency and performance while also ensuring collaboration robustness and safety. Next, we present the large-scale public datasets and summarize quantitative results on these benchmarks. Finally, we discuss the remaining challenges and promising future research directions.
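One collaboration scheme commonly covered in such surveys is intermediate (feature-level) fusion, where spatially aligned feature maps from neighboring agents are merged into the ego vehicle's map. A minimal NumPy sketch, assuming the features are already warped into the ego frame (real systems also compress features before transmission, and the weighting is usually learned):

```python
import numpy as np

def fuse_features(ego_feat, neighbor_feats):
    """Similarity-weighted intermediate fusion: each spatial cell of the ego
    feature map is blended with the co-aligned neighbor features, weighted by
    a softmax over their dot-product similarity to the ego feature."""
    stacked = np.stack([ego_feat] + neighbor_feats)   # (agents, C, H, W)
    scores = (stacked * ego_feat).sum(axis=1)         # (agents, H, W)
    weights = np.exp(scores - scores.max(axis=0))     # per-cell softmax
    weights /= weights.sum(axis=0)
    return (weights[:, None] * stacked).sum(axis=0)   # (C, H, W)

rng = np.random.default_rng(1)
ego = rng.normal(size=(8, 16, 16))
others = [rng.normal(size=(8, 16, 16)) for _ in range(2)]
fused = fuse_features(ego, others)
```

This is only a toy illustration of the data flow; the survey's taxonomy distinguishes such intermediate fusion from early (raw-data) and late (detection-level) collaboration.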

* 19 pages, 8 figures 

CATNet: Context AggregaTion Network for Instance Segmentation in Remote Sensing Images

Nov 22, 2021
Ye Liu, Huifang Li, Chao Hu, Shuang Luo, Huanfeng Shen, Chang Wen Chen

The task of instance segmentation in remote sensing images, which aims at per-pixel labeling of objects at the instance level, is of great importance for various civil applications. Despite previous successes, most existing instance segmentation methods designed for natural images suffer sharp performance degradation when directly applied to top-view remote sensing images. Through careful analysis, we observe that the challenges mainly come from a lack of discriminative object features due to severe scale variations, low contrast, and clustered distributions. To address these problems, a novel context aggregation network (CATNet) is proposed to improve the feature extraction process. The proposed model exploits three lightweight plug-and-play modules, namely the dense feature pyramid network (DenseFPN), the spatial context pyramid (SCP), and the hierarchical region of interest extractor (HRoIE), to aggregate global visual context in the feature, spatial, and instance domains, respectively. DenseFPN is a multi-scale feature propagation module that establishes more flexible information flows by adopting inter-level residual connections, cross-level dense connections, and a feature re-weighting strategy. Leveraging the attention mechanism, SCP further augments the features by aggregating global spatial context into local regions. For each instance, HRoIE adaptively generates RoI features for the different downstream tasks. We carry out extensive evaluations of the proposed scheme on the challenging iSAID, DIOR, NWPU VHR-10, and HRSID datasets. The evaluation results demonstrate that the proposed approach outperforms state-of-the-art methods at similar computational cost. Code is available at https://github.com/yeliudev/CATNet.
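The SCP idea of aggregating global spatial context back into local features can be sketched in plain NumPy. This toy version uses a fixed softmax attention map in place of the learned one, so it illustrates only the data flow, not the published module:

```python
import numpy as np

def spatial_context_augment(feat):
    """Pool the feature map into a single global context vector via a
    softmax attention map over spatial locations, then add that context
    back to every location. (Illustrative only; in CATNet the attention
    is learned end to end.)"""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)
    attn = flat.mean(axis=0)              # (H*W,) channel-mean as saliency proxy
    attn = np.exp(attn - attn.max())      # softmax over spatial positions
    attn /= attn.sum()
    context = flat @ attn                 # (C,) global context vector
    return feat + context[:, None, None]  # broadcast context to all locations

x = np.random.default_rng(2).normal(size=(8, 16, 16))
out = spatial_context_augment(x)
```

The design point this mirrors is that low-contrast, small objects benefit from scene-level evidence that a purely local receptive field cannot provide.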

Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery

Feb 05, 2017
Zhiwei Li, Huanfeng Shen, Huifang Li, Guisong Xia, Paolo Gamba, Liangpei Zhang

The wide field of view (WFV) imaging system onboard the Chinese GaoFen-1 (GF-1) optical satellite provides a 16-m resolution and a four-day revisit cycle for large-scale Earth observation. This combination of high spatio-temporal resolution and wide field of view has made GF-1 WFV imagery very popular. However, cloud cover is an unavoidable problem in GF-1 WFV imagery, which limits its precise application. Accurate cloud and cloud shadow detection in GF-1 WFV imagery is particularly difficult because the sensor provides only three visible bands and one near-infrared band. In this paper, an automatic multi-feature combined (MFC) method is proposed for cloud and cloud shadow detection in GF-1 WFV imagery. The MFC algorithm first performs threshold segmentation based on spectral features, followed by mask refinement based on guided filtering, to generate a preliminary cloud mask. Geometric features are then used in combination with texture features to improve the cloud detection results and produce the final cloud mask. Finally, the cloud shadow mask is obtained by cloud-shadow matching and a follow-up correction process. The method was validated on 108 globally distributed scenes. The results indicate that MFC performs well under most conditions, with an average overall cloud detection accuracy as high as 96.8%. In a comparative analysis against the officially provided cloud fractions, MFC shows a significant improvement in cloud fraction estimation and achieves high accuracy for cloud and cloud shadow detection in GF-1 WFV imagery despite the limited number of spectral bands. The proposed method could serve as a preprocessing step for applications such as land-cover change monitoring, and it could easily be extended to other optical satellite imagery with a similar spectral setting.
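The first MFC stage, threshold segmentation on spectral features, can be sketched as follows. The thresholds and the toy scene are illustrative, not the published MFC values, and the guided-filter refinement, geometric/texture tests, and shadow-matching stages are omitted:

```python
import numpy as np

def preliminary_cloud_mask(blue, nir, t_vis=0.35, t_nir=0.30):
    """Per-pixel threshold segmentation: clouds are bright in both the
    visible and near-infrared bands, so pixels exceeding both reflectance
    thresholds are flagged as candidate cloud. (Thresholds are illustrative.)"""
    return (blue > t_vis) & (nir > t_nir)

# Toy scene: dark land surface with one bright 10x10 "cloud" patch.
blue = np.full((50, 50), 0.08)
nir = np.full((50, 50), 0.20)
blue[10:20, 10:20] = 0.60
nir[10:20, 10:20] = 0.50
mask = preliminary_cloud_mask(blue, nir)
```

Requiring both bands to be bright is what keeps bright-but-spectrally-distinct surfaces (e.g. water with high blue but low NIR reflectance) out of the preliminary mask.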

* Remote Sensing of Environment, vol. 191, pp. 342-358, 2017 (http://www.sciencedirect.com/science/article/pii/S003442571730038X)