
Lei Guo


Asymptotically Efficient Online Learning for Censored Regression Models Under Non-I.I.D Data

Sep 18, 2023
Lantian Zhang, Lei Guo

The asymptotically efficient online learning problem is investigated for stochastic censored regression models, which arise in various fields of learning and statistics but have so far lacked comprehensive theoretical studies on the efficiency of learning algorithms. To this end, we propose a two-step online algorithm: the first step focuses on achieving algorithm convergence, and the second step is dedicated to improving estimation performance. Under a general excitation condition on the data, we show that our algorithm is strongly consistent and asymptotically normal by employing the stochastic Lyapunov function method and limit theories for martingales. Moreover, we show that the covariances of the estimates asymptotically achieve the Cramér-Rao (C-R) bound, indicating that the performance of the proposed algorithm is the best one can expect in general. Unlike most existing works, our results are obtained without resorting to the traditionally used but stringent condition that the data be independent and identically distributed (i.i.d.), and thus they do not exclude applications to stochastic dynamical systems with feedback. A numerical example is also provided to illustrate the superiority of the proposed online algorithm over existing related ones in the literature.
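The two-step structure can be illustrated on a toy censored (Tobit) model y_t = max(0, θᵀφ_t + ε_t): a first stochastic-gradient pass aimed only at convergence, followed by a Newton-type refinement with a recursively built information matrix, aimed at efficiency. This is a minimal numpy sketch, not the paper's algorithm; the simulation setup, step sizes, and the use of φφᵀ as an information proxy are all illustrative assumptions.

```python
import math
import numpy as np

def norm_pdf(z): return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
def norm_cdf(z): return 0.5 * math.erfc(-z / math.sqrt(2))

def tobit_score(theta, phi, y, sigma=1.0):
    """Gradient of the per-sample log-likelihood for the censored model
    y = max(0, theta.phi + eps), eps ~ N(0, sigma^2)."""
    z = float(phi @ theta) / sigma
    if y > 0:                                   # uncensored observation
        return phi * (y - phi @ theta) / sigma**2
    # censored observation: gradient of log Phi(-z)
    return -phi * norm_pdf(z) / max(norm_cdf(-z), 1e-12) / sigma

# --- simulated censored data (hypothetical setup for illustration) ---
rng = np.random.default_rng(0)
d, T = 2, 20000
theta_true = np.array([1.0, -0.5])
phi_all = rng.normal(size=(T, d))
y_all = np.maximum(0.0, phi_all @ theta_true + rng.normal(size=T))

# Step 1: crude stochastic-gradient pass, aimed only at convergence.
theta = np.zeros(d)
for t in range(T // 2):
    theta += (1.0 / (t + 10)) * tobit_score(theta, phi_all[t], y_all[t])

# Step 2: Newton-type refinement with a recursively accumulated
# information matrix, aimed at improving estimation performance.
P = np.eye(d)
for t in range(T // 2, T):
    s = tobit_score(theta, phi_all[t], y_all[t])
    P += np.outer(phi_all[t], phi_all[t])       # crude information proxy
    theta += np.linalg.solve(P, s)

print(theta, theta_true)                        # compare estimate to truth
```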

* 35 pages 

Joint Device-Edge Digital Semantic Communication with Adaptive Network Split and Learned Non-Linear Quantization

May 22, 2023
Lei Guo, Wei Chen, Yuxuan Sun, Bo Ai


Semantic communication, an intelligent communication paradigm that aims to transmit useful information in the semantic domain, is facilitated by deep learning techniques. Although robust semantic features can be learned and transmitted in an analog fashion, analog transmission poses new challenges for hardware, protocols, and encryption. In this paper, we propose a digital semantic communication system consisting of an encoding network deployed on a resource-limited device and a decoding network deployed at the edge. To acquire a better semantic representation for digital transmission, we propose a novel non-linear quantization module with trainable quantization levels that efficiently quantizes semantic features. Additionally, structured pruning by a sparse scaling vector is incorporated to reduce the dimension of the transmitted features. We also introduce a semantic learning loss (SLL) function to reduce semantic error. To adapt to various channel conditions and inputs under communication and computing resource constraints, a policy network is designed to adaptively choose the split point and the dimension of the transmitted semantic features. Experiments on the CIFAR-10 image classification dataset evaluate the proposed digital semantic communication network, and ablation studies assess the proposed modules, including the quantization module, structured pruning, and SLL.
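The core idea behind trainable quantization levels can be sketched as follows. In the paper the levels are learned end-to-end inside the network (typically with a straight-through estimator for gradients); this numpy sketch only illustrates levels adapting to the feature distribution, here via a damped Lloyd-style update on a hypothetical feature vector.

```python
import numpy as np

def quantize(x, levels):
    """Map each feature to its nearest quantization level (forward pass).
    In a real training loop, gradients would pass through this step via a
    straight-through estimator."""
    idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx], idx

rng = np.random.default_rng(1)
x = rng.normal(size=256)              # hypothetical semantic feature vector
levels = np.linspace(-1.5, 1.5, 4)    # 2-bit trainable levels, linear init

for _ in range(50):                   # adapt levels to the feature statistics
    _, idx = quantize(x, levels)
    for k in range(len(levels)):
        assigned = x[idx == k]
        if assigned.size:             # damped update toward the cluster mean
            levels[k] += 0.5 * (assigned.mean() - levels[k])

q, _ = quantize(x, levels)
mse = float(np.mean((x - q) ** 2))    # distortion after training the levels
print("levels:", np.round(levels, 3), "distortion:", round(mse, 4))
```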


Lightweight Self-Knowledge Distillation with Multi-source Information Fusion

May 16, 2023
Xucong Wang, Pengchao Han, Lei Guo


Knowledge Distillation (KD) is a powerful technique for transferring knowledge between neural network models, where a pre-trained teacher model is used to facilitate the training of a target student model. However, a suitable teacher model is not always available. To address this challenge, Self-Knowledge Distillation (SKD) constructs a teacher from the student model itself. Existing SKD methods add Auxiliary Classifiers (AC) to intermediate layers of the model, or use historical models and models fed with different input data from the same class. However, these methods are computationally expensive and capture only time-wise and class-wise features of the data. In this paper, we propose a lightweight SKD framework that utilizes multi-source information to construct a more informative teacher. Specifically, we introduce a Distillation with Reverse Guidance (DRG) method that considers the different levels of information extracted by the model, including the edge, shape, and detail of the input data, to construct a more informative teacher. Additionally, we design a Distillation with Shape-wise Regularization (DSR) method that ensures a consistent shape of the ranked model output for all data. We validate the performance of the proposed DRG, DSR, and their combination through comprehensive experiments on various datasets and models. Our results demonstrate the superiority of the proposed methods over baselines (by up to 2.87%) and state-of-the-art SKD methods (by up to 1.15%), while being computationally efficient and robust. The code is available at https://github.com/xucong-parsifal/LightSKD.
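The DSR idea (a consistent shape of the ranked model output across samples) can be sketched as a penalty on sorted predicted distributions. This is an illustrative reading of the abstract, not the paper's exact loss; see the linked repository for the actual implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dsr_loss(logits):
    """Penalize deviation of each sample's sorted (ranked) output
    distribution from the batch-mean sorted shape, pushing all samples
    toward a consistent ranked-output shape."""
    shapes = np.sort(softmax(logits), axis=-1)[:, ::-1]  # descending shapes
    mean_shape = shapes.mean(axis=0, keepdims=True)
    return float(np.mean((shapes - mean_shape) ** 2))

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))     # a toy batch of class logits
print("DSR penalty:", dsr_loss(logits))
```

A batch whose samples all share the same ranked output shape incurs zero penalty, which is exactly the consistency the regularizer enforces.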

* Submitted to IEEE TNNLS 

ImpressionGPT: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT

May 03, 2023
Chong Ma, Zihao Wu, Jiaqi Wang, Shaochen Xu, Yaonai Wei, Zhengliang Liu, Xi Jiang, Lei Guo, Xiaoyan Cai, Shu Zhang, Tuo Zhang, Dajiang Zhu, Dinggang Shen, Tianming Liu, Xiang Li


The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians, and it is typically written by radiologists based on the 'Findings' section. However, writing numerous impressions can be laborious and error-prone for radiologists. Although recent studies have achieved promising results in automatic impression generation using large-scale medical text data for pre-training and fine-tuning pre-trained language models, such models often require substantial amounts of medical text data and have poor generalization performance. While large language models (LLMs) like ChatGPT have shown strong generalization capabilities and performance, their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, which leverages the in-context learning capability of LLMs by constructing dynamic contexts using domain-specific, individualized data. This dynamic prompt approach enables the model to learn contextual knowledge from semantically similar examples from existing data. Additionally, we design an iterative optimization algorithm that performs automatic evaluation on the generated impression results and composes the corresponding instruction prompts to further optimize the model. The proposed ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and OpenI datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.
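The iterative optimization described above can be sketched as a generic generate-evaluate-refold loop, where good and bad drafts are folded back into the instruction prompt for the next round. Here `generate` and `evaluate` are hypothetical stand-ins for the ChatGPT call and the automatic evaluation metric; the prompt structure is illustrative, not ImpressionGPT's exact format.

```python
def iterative_optimize(generate, evaluate, context_examples, rounds=3):
    """Sketch of an iterative prompt-optimization loop: generate an
    impression, score it automatically, and fold good/bad examples back
    into the instruction prompt for the next round."""
    good, bad = [], []
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        # compose the instruction prompt from retrieved examples plus
        # feedback from previous rounds
        prompt = {"examples": context_examples, "good": good, "bad": bad}
        draft = generate(prompt)
        score = evaluate(draft)
        (good if score > best_score else bad).append(draft)
        if score > best_score:
            best, best_score = draft, score
    return best, best_score

# toy stand-ins: a fixed sequence of drafts, scored by length
drafts = iter(["short", "longer draft", "even longer draft"])
best, score = iterative_optimize(lambda prompt: next(drafts), len, [])
print(best, score)
```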


Automated Prompting for Non-overlapping Cross-domain Sequential Recommendation

Apr 09, 2023
Lei Guo, Chunxiao Wang, Xinhua Wang, Lei Zhu, Hongzhi Yin


Cross-domain Recommendation (CR) has been extensively studied in recent years to alleviate the data sparsity issue in recommender systems by utilizing information from different domains. In this work, we focus on the more general Non-overlapping Cross-domain Sequential Recommendation (NCSR) scenario. NCSR is challenging because there are no overlapped entities (e.g., users and items) between domains, and only users' implicit feedback is available, with no content information. Previous CR methods cannot solve NCSR well, since (1) they either need extra content to align domains or need explicit domain alignment constraints to reduce the domain discrepancy from domain-invariant features; (2) they pay more attention to users' explicit feedback (i.e., rating data) and cannot well capture sequential interaction patterns; and (3) they usually perform a single-target cross-domain recommendation task and seldom investigate dual-target ones. Considering these challenges, we propose the Prompt Learning-based Cross-domain Recommender (PLCR), an automated prompting-based recommendation framework for the NCSR task. Specifically, to address challenge (1), PLCR learns domain-invariant and domain-specific representations via its prompt learning component, where the domain alignment constraint is discarded. For challenges (2) and (3), PLCR introduces a pre-trained sequence encoder to learn users' sequential interaction patterns, and pursues a dual learning target with a separation constraint to enhance recommendations in both domains. Our empirical study on two sub-collections of Amazon demonstrates the advantage of PLCR over related state-of-the-art methods.


Sector Bounds for Vertical Cable Force Error in Cable-Suspended Load Transportation System

Apr 01, 2023
Lidan Xu, Hao Lu, JianLiang Wang, Xianggui Guo, Lei Guo


This article studies the collaborative transportation of a cable-suspended pipe by two quadrotors. A force-coordination control scheme is proposed, in which a force-consensus term is introduced to average the load distribution between the quadrotors. Since thrust uncertainty and cable force are coupled in the acceleration channel, a disturbance observer can only obtain an estimate of the lumped disturbance. Under a quasi-static condition, a disturbance separation strategy is developed to remove the thrust uncertainty estimate and obtain a precise cable force estimate. The stability of the overall system is analyzed using Lyapunov theory. Both numerical simulations and indoor experiments with heterogeneous quadrotors validate the effectiveness of the thrust-uncertainty separation and the force-consensus algorithm.


Towards Lightweight Cross-domain Sequential Recommendation via External Attention-enhanced Graph Convolution Network

Feb 07, 2023
Jinyu Zhang, Huichuan Duan, Lei Guo, Liancheng Xu, Xinhua Wang


Cross-domain Sequential Recommendation (CSR) is an emerging yet challenging task that models the evolution of behavior patterns for overlapped users from their interactions in multiple domains. Existing studies on CSR mainly use composite or in-depth structures that achieve significant gains in accuracy but impose a heavy burden on model training. Moreover, to learn user-specific sequence representations, existing works usually adopt a global relevance weighting strategy (e.g., the self-attention mechanism), which has quadratic computational complexity. In this work, we introduce LEA-GCN, a lightweight external attention-enhanced GCN-based framework that addresses these challenges. Specifically, by keeping only the neighborhood aggregation component and using the Single-Layer Aggregating Protocol (SLAP), our lightweight GCN encoder captures the collaborative filtering signals of items from both domains more efficiently. To further lighten the framework and aggregate user-specific sequential patterns, we devise a novel dual-channel External Attention (EA) component that calculates the correlation among all items via a lightweight linear structure. Extensive experiments on two real-world datasets demonstrate that LEA-GCN requires a smaller model size and less training time than several state-of-the-art methods, without sacrificing accuracy.
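The linear-complexity idea behind external attention is to attend against small learnable external memories rather than the sequence itself, so the cost grows linearly in the sequence length n instead of quadratically as in self-attention. The memory size and the double-normalization variant below are assumptions for illustration, not LEA-GCN's exact dual-channel component.

```python
import numpy as np

def external_attention(x, m_k, m_v):
    """External attention sketch: correlate the n inputs with s << n
    learnable memory slots, giving O(n*s) cost instead of O(n^2)."""
    attn = x @ m_k.T                                 # (n, d) @ (d, s) -> (n, s)
    attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over memory slots
    attn /= attn.sum(axis=0, keepdims=True) + 1e-9   # double normalization
    return attn @ m_v                                # (n, s) @ (s, d) -> (n, d)

rng = np.random.default_rng(0)
n, d, s = 128, 16, 8          # sequence length, feature dim, memory slots
x = rng.normal(size=(n, d))
m_k = rng.normal(size=(s, d)) # learnable key memory (random here)
m_v = rng.normal(size=(s, d)) # learnable value memory (random here)
out = external_attention(x, m_k, m_v)
print(out.shape)
```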

* 16 pages, 4 figures, conference paper, accepted by DASFAA 2023 

Deep Forest with Hashing Screening and Window Screening

Jul 25, 2022
Pengfei Ma, Youxi Wu, Yan Li, Lei Guo, He Jiang, Xingquan Zhu, Xindong Wu


As a novel deep learning model, gcForest has been widely used in various applications. However, the current multi-grained scanning of gcForest produces many redundant feature vectors, which increases the time cost of the model. To screen out redundant feature vectors, we introduce a hashing screening mechanism for multi-grained scanning and propose HW-Forest, a model that adopts two strategies: hashing screening and window screening. In the hashing screening strategy, HW-Forest employs a perceptual hashing algorithm to calculate the similarity between feature vectors, which is used to remove the redundant feature vectors produced by multi-grained scanning and can significantly decrease time cost and memory consumption. Furthermore, we adopt a self-adaptive instance screening strategy, called window screening, to improve the performance of our approach; it achieves higher accuracy without hyperparameter tuning on different datasets. Our experimental results show that HW-Forest achieves higher accuracy than other models while also reducing time cost.
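The hashing screening idea can be sketched as follows: hash each feature vector into a short bit string and discard vectors whose hashes nearly collide with an already-kept one. The block-mean hash and the Hamming threshold below are illustrative stand-ins, not HW-Forest's exact perceptual hashing algorithm.

```python
import numpy as np

def phash(vec, bits=16):
    """Crude perceptual-style hash: compare each of `bits` block means
    with the overall mean and pack the comparisons into a bit array."""
    blocks = np.array_split(vec, bits)
    means = np.array([b.mean() for b in blocks])
    return (means > vec.mean()).astype(np.uint8)

def screen(vectors, max_hamming=2):
    """Keep a vector only if its hash differs from every kept hash by
    more than `max_hamming` bits, discarding near-duplicate features."""
    kept, hashes = [], []
    for v in vectors:
        h = phash(v)
        if all(np.sum(h != g) > max_hamming for g in hashes):
            kept.append(v)
            hashes.append(h)
    return kept

rng = np.random.default_rng(0)
base = rng.normal(size=64)
# two near-duplicate feature vectors plus one unrelated vector
vectors = [base, base + 1e-3, rng.normal(size=64)]
print(len(screen(vectors)))
```

Because the hash compares block means with the overall mean, a uniform perturbation of a vector leaves its hash unchanged, so the two near-duplicates collapse to one kept vector.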


Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model

Jul 14, 2022
Haoteng Tang, Guixiang Ma, Lei Guo, Xiyao Fu, Heng Huang, Liang Zhang


Recently, brain networks have been widely adopted to study brain dynamics, brain development, and brain diseases. Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases. However, current graph learning techniques have several issues in brain network mining. First, most current graph learning models are designed for unsigned graphs, which hinders the analysis of many signed network data (e.g., brain functional networks). Meanwhile, the scarcity of brain network data limits model performance on clinical phenotype prediction. Moreover, few current graph learning models are interpretable, which limits their ability to provide biological insights into model outcomes. Here, we propose an interpretable hierarchical signed graph representation learning model that extracts graph-level representations from brain functional networks for use in different prediction tasks. To further improve model performance, we also propose a new strategy to augment functional brain network data for contrastive learning. We evaluate this framework on different classification and regression tasks using data from HCP and OASIS. Results from extensive experiments demonstrate the superiority of the proposed model over several state-of-the-art techniques. Additionally, we use graph saliency maps derived from these prediction tasks to demonstrate the detection and interpretation of phenotypic biomarkers.
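One common way to augment network data for contrastive learning is sign-preserving edge dropout: randomly zeroing edges of the signed functional connectivity matrix while keeping the signs of surviving correlations and the matrix symmetry intact. The paper proposes its own augmentation strategy, so this numpy sketch is only an illustrative stand-in.

```python
import numpy as np

def augment_signed_network(w, drop_prob=0.1, rng=None):
    """Generate a contrastive view of a signed, symmetric network by
    dropping a fraction of edges; surviving edge weights keep their sign."""
    rng = rng or np.random.default_rng()
    n = w.shape[0]
    keep = rng.random((n, n)) >= drop_prob
    keep = np.triu(keep, 1)              # sample upper triangle only...
    keep = keep | keep.T                 # ...then mirror to stay symmetric
    out = w * keep
    np.fill_diagonal(out, np.diag(w))    # keep self-connections intact
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 8))
w = (a + a.T) / 2                        # toy signed functional network
view = augment_signed_network(w, drop_prob=0.2, rng=rng)
print(np.allclose(view, view.T))
```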
