Zhiqiang Yuan

Frequency Compensated Diffusion Model for Real-scene Dehazing

Aug 21, 2023
Jing Wang, Songtao Wu, Kuanhong Xu, Zhiqiang Yuan

Due to distribution shift, deep learning based methods for image dehazing suffer from performance degradation when applied to real-world hazy images. In this paper, we consider a dehazing framework based on conditional diffusion models for improved generalization to real haze. First, we find that optimizing the training objective of diffusion models, i.e., predicting Gaussian noise vectors, is non-trivial. The spectral bias of deep networks hinders the higher frequency modes in Gaussian vectors from being learned and hence impairs the reconstruction of image details. To tackle this issue, we design a network unit, named the Frequency Compensation block (FCB), with a bank of filters that jointly emphasize the mid-to-high frequencies of an input signal. We demonstrate that diffusion models with FCB achieve significant gains in both perceptual and distortion metrics. Second, to further boost generalization performance, we propose a novel data synthesis pipeline, HazeAug, to augment haze in terms of degree and diversity. Within the framework, a solid baseline for blind dehazing is set up, in which models are trained on synthetic hazy-clean pairs and generalize directly to real data. Extensive evaluations show that the proposed dehazing diffusion model significantly outperforms state-of-the-art methods on real-world images.
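
To make the idea of frequency compensation concrete, here is a minimal PyTorch sketch of a filter-bank unit that emphasizes mid-to-high frequencies and mixes the responses back into the feature stream. The kernel choices (Laplacian and Sobel-like), filter count, and residual fusion are illustrative assumptions, not the paper's exact FCB design.

```python
# Hypothetical sketch of a frequency-compensation unit: a fixed bank of
# high-pass / band-pass kernels whose responses are fused back into the
# feature stream by a learned 1x1 convolution. All design choices here
# are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyCompensationSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Three fixed 3x3 kernels emphasizing mid-to-high frequencies:
        # a Laplacian and two directional (Sobel-like) gradient filters.
        lap = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sy = sx.t()
        bank = torch.stack([lap, sx, sy])                     # (3, 3, 3)
        # Apply each kernel depthwise to every input channel.
        weight = bank.unsqueeze(1).repeat(channels, 1, 1, 1)  # (3C, 1, 3, 3)
        self.register_buffer("weight", weight)
        self.channels = channels
        # Learned fusion of the filtered responses back to `channels` maps.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Depthwise filtering: groups=channels, 3 filters per channel.
        high = F.conv2d(x, self.weight, padding=1, groups=self.channels)
        # Residual fusion: keep the original signal, add compensated
        # high-frequency detail.
        return x + self.fuse(high)

x = torch.randn(1, 64, 32, 32)
print(FrequencyCompensationSketch(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```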

* 16 pages 

Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation

Aug 02, 2023
Zhiqiang Yuan, Junwei Liu, Qiancheng Zi, Mingwei Liu, Xin Peng, Yiling Lou

In this work, we evaluate 10 open-source instructed LLMs on four representative code comprehension and generation tasks. We have the following main findings. First, in the zero-shot setting, instructed LLMs are very competitive on code comprehension and generation tasks and sometimes even better than small SOTA models specifically fine-tuned on each downstream task. We also find that larger instructed LLMs are not always better on code-related tasks. Second, in the few-shot setting, adding demonstration examples substantially helps instructed LLMs perform better on most code comprehension and generation tasks; however, the examples sometimes induce unstable or even worse performance. Furthermore, we find that the widely-used BM25-based shot selection strategy significantly outperforms basic random or fixed selection only on generation problems. Third, in the fine-tuning setting, we find that fine-tuning can further improve model performance on downstream code comprehension and generation tasks compared to the zero-shot/one-shot performance. In addition, after being fine-tuned on the same downstream task dataset, instructed LLMs outperform both the small SOTA models and similarly sized LLMs without instruction tuning. Based on our findings, we further present practical implications for model and usage recommendations, performance and cost trade-offs, and future directions.
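
For intuition, the sketch below shows what BM25-based shot selection could look like: rank a pool of candidate demonstrations by BM25 similarity to the query, then take the top-k as few-shot examples. The whitespace tokenization and the parameters k1 and b are standard defaults assumed for illustration, not the paper's exact setup.

```python
# Minimal BM25-based demonstration selection (illustrative assumption):
# score every candidate in the pool against the query and keep the top-k.
import math
from collections import Counter

def bm25_select(query: str, pool: list[str], k: int = 4,
                k1: float = 1.5, b: float = 0.75) -> list[str]:
    docs = [d.lower().split() for d in pool]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    # Document frequency of each term across the pool.
    df = Counter(t for d in docs for t in set(d))

    def score(q_tokens, doc):
        tf = Counter(doc)
        s = 0.0
        for t in q_tokens:
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        return s

    q = query.lower().split()
    ranked = sorted(range(n), key=lambda i: score(q, docs[i]), reverse=True)
    return [pool[i] for i in ranked[:k]]

pool = ["sort a list of numbers in ascending order",
        "reverse a string in place",
        "compute the factorial of n recursively"]
print(bm25_select("sort numbers ascending", pool, k=2))
```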

Channel Measurement, Modeling, and Simulation for 6G: A Survey and Tutorial

May 26, 2023
Jianhua Zhang, Jiaxin Lin, Pan Tang, Yuxiang Zhang, Huixin Xu, Tianyang Gao, Haiyang Miao, Zeyong Chai, Zhengfu Zhou, Yi Li, Huiwen Gong, Yameng Liu, Zhiqiang Yuan, Ximan Liu, Lei Tian, Shaoshi Yang, Liang Xia, Guangyi Liu, Ping Zhang

Technology research and standardization work on the sixth generation (6G) has been carried out worldwide. Channel research is the prerequisite of 6G technology evaluation and optimization. This paper presents a survey and tutorial on channel measurement, modeling, and simulation for 6G. We first highlight the channel challenges of 6G systems, including higher frequency bands, extremely large antenna arrays, new technology combinations, and diverse application scenarios. A review of channel measurement and modeling for four possible 6G enabling technologies is then presented, i.e., terahertz communication, massive multiple-input multiple-output communication, joint communication and sensing, and reconfigurable intelligent surfaces. Finally, we introduce a 6G channel simulation platform and provide examples of its implementation. The goal of this paper is to help both professionals and non-professionals follow the progress of 6G channel research, understand the 6G channel model, and use it for 6G simulation.

* 37 pages, 30 figures 

A Shared Cluster-based Stochastic Channel Model for Joint Communication and Sensing Systems

Nov 12, 2022
Yameng Liu, Jianhua Zhang, Yuxiang Zhang, Zhiqiang Yuan, Guangyi Liu

Joint communication and sensing (JCAS) has been recognized as a promising technology for sixth generation (6G) communication. A realistic channel model is a prerequisite for designing JCAS systems. Most existing channel models independently generate the communication and sensing channels under the same framework. However, due to the multiplexing of hardware resources (e.g., antennas) and the shared environment, signals used for communication and sensing may encounter common propagation scatterers. This practical sharing feature necessitates the joint generation of communication and sensing channels for realistic modeling, where the shared clusters (contributed by the shared scatterers) should be jointly reconstructed for both channels. In this paper, we first conduct communication and sensing channel measurements for an indoor scenario at 28 GHz. The power-angular-delay profiles (PADPs) of multipath components (MPCs) are obtained, and the scatterers shared by the communication and sensing channels are intuitively observed. Then, a stochastic JCAS channel model is proposed to capture the sharing feature, where shared and non-shared clusters of the two channels are defined and superimposed. To extract those clusters from measured JCAS channels, we introduce a novel KPowerMeans-based joint clustering algorithm (KPM-JCA). Finally, stochastic channel characteristics are analyzed, and the practicality and controllability of the proposed model are validated based on the measurements and empirical simulations. The proposed model can realistically capture the sharing feature of JCAS channels, which is valuable for the design and deployment of JCAS systems.
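
As a rough illustration of the clustering step, the sketch below implements a power-weighted K-means over multipath component parameters, in the spirit of KPowerMeans: each MPC pulls its cluster centroid in proportion to its power. Feature scaling and the joint handling of communication and sensing channels in KPM-JCA are omitted; treat this as an assumption-laden toy, not the paper's algorithm.

```python
# Power-weighted K-means over MPC parameters (illustrative sketch).
import numpy as np

def power_kmeans(features: np.ndarray, power: np.ndarray,
                 k: int, iters: int = 50, seed: int = 0):
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each MPC to its nearest centroid (Euclidean in feature space).
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Power-weighted centroid update: strong paths pull harder.
        for j in range(k):
            m = labels == j
            if m.any():
                w = power[m] / power[m].sum()
                centroids[j] = (w[:, None] * features[m]).sum(axis=0)
    return labels, centroids

# Toy MPCs: columns are (delay in ns, azimuth in degrees); power is linear.
feats = np.array([[10., 30.], [12., 32.], [80., -50.], [82., -48.]])
pwr = np.array([1.0, 0.5, 0.8, 0.2])
labels, cents = power_kmeans(feats, pwr, k=2)
print(labels, cents)
```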

* 11 pages, 8 figures 

Learning to Evaluate Performance of Multi-modal Semantic Localization

Sep 19, 2022
Zhiqiang Yuan, Wenkai Zhang, Chongyang Li, Zhaoying Pan, Yongqiang Mao, Jialiang Chen, Shouke Li, Hongqi Wang, Xian Sun

Semantic localization (SeLo) refers to the task of obtaining the most relevant locations in large-scale remote sensing (RS) images using semantic information such as text. As an emerging task based on cross-modal retrieval, SeLo achieves semantic-level retrieval with only caption-level annotation, which demonstrates its great potential for unifying downstream tasks. Although several SeLo studies have appeared in succession, no work has yet systematically explored and analyzed this pressing direction. In this paper, we thoroughly study this field and provide a complete benchmark, in terms of metrics and test data, to advance the SeLo task. First, based on the characteristics of this task, we propose multiple discriminative evaluation metrics to quantify SeLo performance. The devised significant area proportion, attention shift distance, and discrete attention distance evaluate the generated SeLo map at both the pixel and region levels. Next, to provide standard evaluation data for the SeLo task, we contribute a diverse, multi-semantic, multi-objective Semantic Localization Testset (AIR-SLT). AIR-SLT consists of 22 large-scale RS images and 59 test cases with different semantics, aiming to provide a comprehensive evaluation of retrieval models. Finally, we analyze the SeLo performance of RS cross-modal retrieval models in detail, explore the impact of different variables on this task, and provide a complete benchmark for the SeLo task. We have also established a new paradigm for RS referring expression comprehension and demonstrated the great advantage of SeLo in semantics by combining it with tasks such as detection and road extraction. The proposed evaluation metrics, semantic localization test sets, and corresponding scripts are openly available at github.com/xiaoyuan1996/SemanticLocalizationMetrics .
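
To give a feel for the kind of metric involved, here is one plausible reading of an attention shift distance: the offset between the probability-weighted centroid of a generated SeLo map and the ground-truth region center, normalized by the image diagonal. The authoritative definitions live in the released scripts; this sketch is an assumption for illustration only.

```python
# Hypothetical attention-shift-distance sketch: distance from the SeLo
# map's attention centroid to the ground-truth center, in [0, 1].
import numpy as np

def attention_shift_distance(selo_map: np.ndarray, gt_center: tuple) -> float:
    h, w = selo_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p = selo_map / selo_map.sum()                       # normalize to a distribution
    centroid = np.array([(p * ys).sum(), (p * xs).sum()])
    shift = np.linalg.norm(centroid - np.array(gt_center))
    return float(shift / np.hypot(h, w))                # 0 = centered on target

m = np.zeros((100, 100)); m[40:60, 40:60] = 1.0
print(attention_shift_distance(m, (50, 50)))  # ~0.0 (attention on target)
```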

* 19 pages, 11 figures 

Bidirectional Feature Globalization for Few-shot Semantic Segmentation of 3D Point Cloud Scenes

Aug 17, 2022
Yongqiang Mao, Zonghao Guo, Xiaonan Lu, Zhiqiang Yuan, Haowen Guo

Few-shot segmentation of point clouds remains a challenging task, as there is no effective way to convert local point cloud information into a global representation, which hinders the generalization ability of point features. In this study, we propose a bidirectional feature globalization (BFG) approach, which leverages similarity measurements between point features and prototype vectors to embed global perception into local point features in a bidirectional fashion. With point-to-prototype globalization (Po2PrG), BFG aggregates local point features into prototypes according to similarity weights from dense point features to sparse prototypes. With prototype-to-point globalization (Pr2PoG), global perception is embedded into local point features based on similarity weights from sparse prototypes to dense point features. The sparse prototypes of each class, embedded with global perception, are summarized into a single prototype for few-shot 3D segmentation under a metric learning framework. Extensive experiments on S3DIS and ScanNet demonstrate that BFG significantly outperforms the state-of-the-art methods.
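
A minimal sketch of the bidirectional step described above, under illustrative assumptions (cosine similarity, softmax weighting, residual fusion): points aggregate into prototypes (Po2PrG), and prototypes then feed global context back into point features (Pr2PoG).

```python
# Bidirectional similarity-weighted globalization (illustrative sketch).
import torch
import torch.nn.functional as F

def bfg_step(points: torch.Tensor, protos: torch.Tensor):
    # points: (N, d) local point features; protos: (K, d) prototype vectors.
    sim = F.normalize(points, dim=1) @ F.normalize(protos, dim=1).t()  # (N, K)
    # Po2PrG: each prototype pools the points most similar to it.
    w_po2pr = sim.softmax(dim=0)               # normalize over points
    protos_new = w_po2pr.t() @ points          # (K, d)
    # Pr2PoG: each point absorbs global context from the prototypes.
    w_pr2po = sim.softmax(dim=1)               # normalize over prototypes
    points_new = points + w_pr2po @ protos_new
    return points_new, protos_new

pts, pro = torch.randn(1024, 64), torch.randn(5, 64)
pts2, pro2 = bfg_step(pts, pro)
print(pts2.shape, pro2.shape)  # torch.Size([1024, 64]) torch.Size([5, 64])
```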

* Institutional error 

Exploring a Fine-Grained Multiscale Method for Cross-Modal Remote Sensing Image Retrieval

Apr 21, 2022
Zhiqiang Yuan, Wenkai Zhang, Kun Fu, Xuan Li, Chubo Deng, Hongqi Wang, Xian Sun

Remote sensing (RS) cross-modal text-image retrieval has attracted extensive attention for its advantages of flexible input and efficient query. However, traditional methods ignore the characteristics of multi-scale and redundant targets in RS images, leading to degraded retrieval accuracy. To cope with multi-scale scarcity and target redundancy in the RS multimodal retrieval task, we propose a novel asymmetric multimodal feature matching network (AMFMN). Our model adapts to multi-scale feature inputs, favors multi-source retrieval methods, and can dynamically filter redundant features. AMFMN employs a multi-scale visual self-attention (MVSA) module to extract the salient features of RS images and utilizes visual features to guide the text representation. Furthermore, to alleviate the positive-sample ambiguity caused by strong intraclass similarity in RS images, we propose a triplet loss function with a dynamic variable margin based on the prior similarity of sample pairs. Finally, unlike traditional RS image-text datasets with coarse text and higher intraclass similarity, we construct a fine-grained and more challenging Remote sensing Image-Text Match dataset (RSITMD), which supports RS image retrieval through keywords and sentences, both separately and jointly. Experiments on four RS text-image datasets demonstrate that the proposed model achieves state-of-the-art performance on the cross-modal RS text-image retrieval task.
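
The dynamic-margin idea can be sketched as follows: shrink the triplet margin when the anchor and negative are a priori similar, so that ambiguous near-duplicates are not pushed apart too aggressively. The linear margin schedule and cosine distance here are assumptions; the paper derives its margin from the prior similarity of sample pairs.

```python
# Triplet loss with a similarity-dependent margin (illustrative sketch).
import torch
import torch.nn.functional as F

def dynamic_margin_triplet(anchor, pos, neg, prior_sim,
                           base_margin: float = 0.4, scale: float = 0.3):
    # Soften the margin for a-priori-similar (ambiguous) pairs.
    margin = base_margin - scale * prior_sim         # (B,)
    d_pos = 1 - F.cosine_similarity(anchor, pos)     # (B,)
    d_neg = 1 - F.cosine_similarity(anchor, neg)     # (B,)
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
prior = torch.rand(8)  # e.g., precomputed intraclass similarity in [0, 1]
print(dynamic_margin_triplet(a, p, n, prior))
```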

* in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-19, 2022, Art no. 4404119  

Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information

Apr 21, 2022
Zhiqiang Yuan, Wenkai Zhang, Changyuan Tian, Xuee Rong, Zhengyuan Zhang, Hongqi Wang, Kun Fu, Xian Sun

Cross-modal remote sensing text-image retrieval (RSCTIR) has recently become an urgent research hotspot due to its ability to enable fast and flexible information extraction from remote sensing (RS) images. However, current RSCTIR methods mainly focus on global features of RS images, neglecting the local features that reflect target relationships and saliency. In this article, we first propose a novel RSCTIR framework based on global and local information (GaLR), and design a multi-level information dynamic fusion (MIDF) module to efficaciously integrate features of different levels. MIDF leverages local information to correct global information, utilizes global information to supplement local information, and uses the dynamic addition of the two to generate a prominent visual representation. To alleviate the pressure of redundant targets on the graph convolution network (GCN) and to improve the model's attention on salient instances when modeling local features, the de-noised representation matrix and the enhanced adjacency matrix (DREA) are devised to assist the GCN in producing superior local representations. DREA not only filters out redundant features with high similarity, but also obtains more powerful local features by enhancing the features of prominent objects. Finally, to make full use of the information in the similarity matrix during inference, we propose a plug-and-play multivariate rerank (MR) algorithm. The algorithm uses the k nearest neighbors of the retrieval results to perform a reverse search, and improves performance by combining multiple components of bidirectional retrieval. Extensive experiments on public datasets strongly demonstrate the state-of-the-art performance of the GaLR method on the RSCTIR task. The code of the GaLR method, the MR algorithm, and corresponding files have been made available at https://github.com/xiaoyuan1996/GaLR .
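
A minimal sketch of k-NN reverse-search reranking in this spirit: a gallery item is boosted when, used as a query itself, it ranks the original query highly in the reverse direction. The reciprocal-rank boost and its weight alpha are illustrative assumptions, not the exact MR algorithm.

```python
# k-NN reverse-search reranking over a similarity matrix (sketch).
import numpy as np

def rerank(sim: np.ndarray, k: int = 5, alpha: float = 0.3) -> np.ndarray:
    # sim: (Q, G) query-to-gallery similarity; columns give the reverse view.
    q, g = sim.shape
    boosted = sim.copy()
    for i in range(q):
        topk = np.argsort(-sim[i])[:k]
        for j in topk:
            # Reverse rank of query i when gallery item j does the searching.
            rev_rank = int(np.argsort(-sim[:, j]).tolist().index(i))
            boosted[i, j] += alpha / (1 + rev_rank)
    return boosted

sim = np.random.rand(4, 10)
print(rerank(sim).shape)  # (4, 10)
```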

* in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-16, 2022, Art no. 5620616  

Pixel Contrastive-Consistent Semi-Supervised Semantic Segmentation

Aug 20, 2021
Yuanyi Zhong, Bodi Yuan, Hong Wu, Zhiqiang Yuan, Jian Peng, Yu-Xiong Wang

We present a novel semi-supervised semantic segmentation method which jointly achieves two desiderata of segmentation model regularities: the label-space consistency property between image augmentations and the feature-space contrastive property among different pixels. We leverage the pixel-level L2 loss and the pixel contrastive loss for the two purposes respectively. To address the computational efficiency issue and the false negative noise issue involved in the pixel contrastive loss, we further introduce and investigate several negative sampling techniques. Extensive experiments demonstrate the state-of-the-art performance of our method (PC2Seg) with the DeepLab-v3+ architecture, in several challenging semi-supervised settings derived from the VOC, Cityscapes, and COCO datasets.
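
For intuition, here is a hedged sketch of a pixel-level InfoNCE-style contrastive loss with uniform negative sampling, the generic form of the second regularity; PC2Seg's actual positive pairing across augmentations and its negative-sampling techniques are more elaborate than this.

```python
# Pixel-level contrastive loss with random negative sampling (sketch).
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feat_a, feat_b, n_neg: int = 256, tau: float = 0.1):
    # feat_a, feat_b: (P, d) pixel embeddings from two augmented views;
    # row i of each view corresponds to the same pixel (a positive pair).
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    pos = (feat_a * feat_b).sum(dim=1, keepdim=True) / tau       # (P, 1)
    # Draw negatives uniformly from the other view; for simplicity they
    # may occasionally coincide with the positive pixel.
    idx = torch.randint(0, feat_b.size(0), (feat_a.size(0), n_neg))
    neg = torch.einsum('pd,pnd->pn', feat_a, feat_b[idx]) / tau  # (P, n_neg)
    logits = torch.cat([pos, neg], dim=1)
    # The positive sits at index 0 of every row.
    target = torch.zeros(feat_a.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)

fa, fb = torch.randn(512, 64), torch.randn(512, 64)
print(pixel_contrastive_loss(fa, fb))
```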

* To appear in ICCV 2021 