
Xiaolin Li


BatmanNet: Bi-branch Masked Graph Transformer Autoencoder for Molecular Representation

Nov 29, 2022
Zhen Wang, Zheng Feng, Yanjun Li, Bowen Li, Yongrui Wang, Chulin Sha, Min He, Xiaolin Li

Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that big GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, making them time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges of a masked molecular graph. To our surprise, we found that a high mask proportion (60%) of the atoms and bonds yields the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder takes only the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet learns efficiently even from a much smaller-scale unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement in average AUC compared with the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
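
Below is a minimal PyTorch sketch of the asymmetric masked-autoencoder idea on the node branch only. The module names, layer sizes, and the plain MSE reconstruction loss are illustrative assumptions, not the authors' implementation; BatmanNet additionally runs a parallel edge branch and operates on molecular graphs rather than generic feature matrices.

```python
# Sketch: mask most nodes, encode only the visible ones, decode from
# latents plus learned mask tokens. Sizes and losses are illustrative.
import torch
import torch.nn as nn

class MaskedNodeAutoencoder(nn.Module):
    def __init__(self, feat_dim=32, hid_dim=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio   # 60% masking, per the reported finding
        self.embed = nn.Linear(feat_dim, hid_dim)
        self.encoder = nn.TransformerEncoder(      # sees only visible nodes
            nn.TransformerEncoderLayer(d_model=hid_dim, nhead=4, batch_first=True),
            num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, hid_dim))
        self.decoder = nn.TransformerEncoder(      # lightweight: one layer
            nn.TransformerEncoderLayer(d_model=hid_dim, nhead=4, batch_first=True),
            num_layers=1)
        self.head = nn.Linear(hid_dim, feat_dim)   # reconstruct node features

    def forward(self, x):                          # x: (batch, n_nodes, feat_dim)
        b, n, _ = x.shape
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        keep = torch.rand(b, n).argsort(dim=1)[:, :n_keep]  # random mask per graph
        emb = self.embed(x)                                 # (b, n, hid)
        idx = keep.unsqueeze(-1).expand(-1, -1, emb.size(-1))
        latent = self.encoder(torch.gather(emb, 1, idx))    # encode visible subset
        full = self.mask_token.expand(b, n, -1).clone()     # mask tokens everywhere
        full.scatter_(1, idx, latent)                       # put latents back in place
        recon = self.head(self.decoder(full))
        return ((recon - x) ** 2).mean()                    # reconstruction loss

model = MaskedNodeAutoencoder()
print(model(torch.randn(4, 30, 32)).item())   # toy batch: 4 graphs, 30 nodes each
```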

* 11 pages, 3 figures 

Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction

May 28, 2022
Dongjie Wang, Yanjie Fu, Kunpeng Liu, Xiaolin Li, Yan Solihin

Representation (feature) space is an environment where data points are vectorized, distances are computed, patterns are characterized, and geometric structures are embedded. Extracting a good representation space is critical to addressing the curse of dimensionality, improving model generalization, overcoming data sparsity, and extending the applicability of classic models. Existing work, such as feature engineering and representation learning, falls short of full automation (e.g., heavy reliance on intensive labor and empirical experience), explainable explicitness (e.g., a traceable reconstruction process and explainable new features), and flexible optimality (e.g., optimal feature space reconstruction is not embedded into downstream tasks). Can we simultaneously address the automation, explicitness, and optimality challenges in representation space reconstruction for a machine learning task? To answer this question, we propose a group-wise reinforcement generation perspective. We reformulate representation space reconstruction as an interactive process of nested feature generation and selection, where feature generation produces new meaningful and explicit features, and feature selection eliminates redundant features to control the feature-set size. We develop a cascading reinforcement learning method that leverages three cascading Markov Decision Processes to learn optimal generation policies that automate the selection of features and operations and the feature crossing. We design a group-wise generation strategy that crosses a feature group, an operation, and another feature group to generate new features, and we find that this strategy enhances exploration efficiency and augments the reward signals of the cascading agents. Finally, we present extensive experiments demonstrating the effectiveness, efficiency, traceability, and explicitness of our system.
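
As a toy illustration of the group-wise crossing step, the sketch below crosses two hand-picked feature groups with one operation and filters redundant results with a simple correlation threshold. In the paper, the groups, operation, and selection are chosen by the cascading RL agents; everything hard-coded here is an illustrative stand-in.

```python
# Group-wise crossing: apply one operation to every column pair drawn
# from two feature groups, then drop near-duplicates of existing features.
import numpy as np

def group_cross(X, group_a, group_b, op):
    """Apply `op` to every column pair (i, j), i in group_a, j in group_b."""
    new_feats = [op(X[:, i], X[:, j]) for i in group_a for j in group_b]
    return np.stack(new_feats, axis=1)

def drop_redundant(X_new, X_old, max_corr=0.95):
    """Keep only generated features weakly correlated with existing ones."""
    kept = []
    for k in range(X_new.shape[1]):
        corrs = [abs(np.corrcoef(X_new[:, k], X_old[:, j])[0, 1])
                 for j in range(X_old.shape[1])]
        if max(corrs) < max_corr:
            kept.append(k)
    return X_new[:, kept]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
crossed = group_cross(X, group_a=[0, 1], group_b=[3, 4], op=np.multiply)
X = np.hstack([X, drop_redundant(crossed, X)])
print(X.shape)   # original 6 columns plus the surviving crossed features
```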

* KDD 2022 

Semi-supervised Drifted Stream Learning with Short Lookback

May 25, 2022
Weijieying Ren, Pengyang Wang, Xiaolin Li, Charles E. Hughes, Yanjie Fu

In many scenarios, 1) data streams are generated in real time; 2) labeled data are expensive, and only limited labels are available at the beginning; 3) real-world data are not always i.i.d., and data drift gradually over time; 4) the storage of historical streams is limited, so the model can only be updated from a very short lookback window. This learning setting limits the applicability and availability of many machine learning (ML) algorithms. We generalize the learning task under this setting as the semi-supervised drifted stream learning with short lookback problem (SDSL). SDSL imposes two under-addressed challenges on existing methods in semi-supervised learning, continual learning, and domain adaptation: 1) robust pseudo-labeling under gradual shifts and 2) anti-forgetting adaptation with a short lookback. To tackle these challenges, we propose a principled and generic generation-replay framework to solve SDSL. The framework accomplishes 1) robust pseudo-labeling in the generation step and 2) anti-forgetting adaptation in the replay step. To achieve robust pseudo-labeling, we develop a novel pseudo-label classification model that leverages supervised knowledge of previously labeled data, unsupervised knowledge of new data, and structural knowledge of invariant label semantics. To achieve adaptive anti-forgetting model replay, we view the anti-forgetting adaptation task as a flat-region search problem. We propose a novel minimax game-based replay objective function to solve the flat-region search problem and develop an effective optimization solver. Finally, we present extensive experiments demonstrating that our framework effectively addresses anti-forgetting learning in drifted streams with a short lookback.
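
The following sketch gives one plausible reading of the robust pseudo-labeling step: a pseudo-label is accepted only when a classifier fitted on the old labeled data is confident and agrees with the nearest class prototype. The confidence threshold, the prototype check, and the scikit-learn classifier are illustrative assumptions; the paper's model additionally exploits structural knowledge of invariant label semantics.

```python
# Conservative pseudo-labeling: abstain unless classifier confidence and a
# nearest-prototype check agree. A stand-in for the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prototypes(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def pseudo_label(X_new, X_old, y_old, clf, margin=0.8):
    clf.fit(X_old, y_old)                          # supervised knowledge
    proba = clf.predict_proba(X_new)
    protos = prototypes(X_old, y_old)
    labels = np.full(len(X_new), -1)               # -1 = abstain
    for i, x in enumerate(X_new):
        nearest = min(protos, key=lambda c: np.linalg.norm(x - protos[c]))
        pred = clf.classes_[proba[i].argmax()]
        # accept only confident predictions that agree with the prototype
        if proba[i].max() >= margin and pred == nearest:
            labels[i] = nearest
    return labels

rng = np.random.default_rng(1)
X_old = rng.normal(size=(200, 5)); y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(loc=0.3, size=(50, 5))          # gradually drifted batch
print(pseudo_label(X_new, X_old, y_old, LogisticRegression()))
```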

* To appear in KDD 2022 

Multistage Pruning of CNN Based ECG Classifiers for Edge Devices

Aug 31, 2021
Xiaolin Li, Rajesh Panicker, Barry Cardiff, Deepu John

Using smart wearable devices to monitor patients' electrocardiograms (ECG) for real-time detection of arrhythmias can significantly improve healthcare outcomes. Convolutional neural network (CNN) based deep learning has been used successfully to detect anomalous beats in ECG. However, the computational complexity of existing CNN models prohibits them from being implemented in low-powered edge devices. Such models are usually complex, with many parameters, resulting in a large number of computations and high memory and power usage on edge devices. Network pruning techniques can reduce model complexity at the expense of performance. This paper presents a novel multistage pruning technique that reduces CNN model complexity with negligible loss in performance compared to existing pruning techniques. An existing CNN model for ECG classification is used as a baseline reference. At 60% sparsity, the proposed technique achieves 97.7% accuracy and an F1 score of 93.59% for ECG classification tasks, an improvement of 3.3% in accuracy and 9% in F1 score over the traditional prune-then-fine-tune approach. Compared to the baseline model, we also achieve a 60.4% decrease in run-time complexity.
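
A minimal PyTorch sketch of staged magnitude pruning with fine-tuning between stages, the general pattern a multistage technique follows, is shown below. The stand-in model, the fake training data, and the three-stage schedule toward 60% sparsity are assumptions for illustration, not the paper's ECG CNN or its schedule.

```python
# Staged magnitude pruning: prune a little, fine-tune, repeat.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv1d(1, 8, 5), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 96, 5))          # stand-in ECG classifier
loss_fn = nn.CrossEntropyLoss()

def fine_tune(model, steps=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):                            # placeholder training loop
        x = torch.randn(32, 1, 100)                   # fake 100-sample ECG segments
        y = torch.randint(0, 5, (32,))
        opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()

# Prune in stages rather than one shot; each `amount` is a fraction of the
# weights still unpruned, so cumulative sparsity goes 20% -> 40% -> 60%.
for amount in (0.2, 0.25, 1 / 3):
    for module in model.modules():
        if isinstance(module, (nn.Conv1d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    fine_tune(model)                                  # recover accuracy per stage

for module in model.modules():                        # finally bake the masks in
    if isinstance(module, (nn.Conv1d, nn.Linear)):
        prune.remove(module, "weight")
```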

* 4 pages 

Server Averaging for Federated Learning

Mar 22, 2021
George Pu, Yanlin Zhou, Dapeng Wu, Xiaolin Li

Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets with a central server. The global model is optimized by training and averaging the model parameters of all local participants. However, the improved privacy of federated learning also introduces challenges, including higher computation and communication costs. In particular, federated learning converges more slowly than centralized training. We propose the server averaging algorithm to accelerate convergence. Server averaging constructs the shared global model by periodically averaging a set of previous global models. Our experiments indicate that server averaging not only converges to a target accuracy faster than federated averaging (FedAvg), but also reduces client-level computation costs through epoch decay.
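
A compact sketch of the core idea, assuming toy stand-ins for the model and local training: alongside the usual FedAvg aggregation, the server periodically replaces the global model with the average of its own recent global models. The window size, the period, and the run_clients stub are hypothetical choices, not the paper's settings.

```python
# FedAvg loop with a periodic "server averaging" step over past global models.
from collections import deque
import copy
import torch
import torch.nn as nn

def average_states(states):
    """Element-wise average of a list of model state_dicts."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

def run_clients(model, n_clients=4):
    """Hypothetical stand-in for local training: clients perturb the weights."""
    states = []
    for _ in range(n_clients):
        local = copy.deepcopy(model.state_dict())
        for key in local:
            local[key] = local[key] + 0.01 * torch.randn_like(local[key].float())
        states.append(local)
    return states

global_model = nn.Linear(10, 2)                # toy stand-in for the global model
history = deque(maxlen=5)                      # a set of previous global models

for rnd in range(20):
    client_states = run_clients(global_model)
    global_model.load_state_dict(average_states(client_states))   # FedAvg step
    history.append(copy.deepcopy(global_model.state_dict()))
    if rnd % 5 == 4:                           # periodic server averaging step
        global_model.load_state_dict(average_states(list(history)))
```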


Federated Unsupervised Representation Learning

Oct 18, 2020
Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Yueting Zhuang, Xiaolin Li

To leverage the enormous amount of unlabeled data on distributed edge devices, we formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL): learning a common representation model without supervision while preserving data privacy. FURL poses two new challenges: (1) data distribution shift (non-IID distributions) among clients makes local models focus on different categories, leading to inconsistent representation spaces; (2) without unified information shared among clients, the representations across clients become misaligned. To address these challenges, we propose the Federated Contrastive Averaging with dictionary and alignment (FedCA) algorithm. FedCA is composed of two key modules: (1) a dictionary module that aggregates the representations of samples from each client and shares them with all clients to keep the representation space consistent, and (2) an alignment module that aligns each client's representations with those of a base model trained on public data. We adopt the contrastive loss for local model training. Through extensive experiments with three evaluation protocols in IID and non-IID settings, we demonstrate that FedCA outperforms all baselines by significant margins.
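
The sketch below shows a dictionary-based contrastive loss in the spirit of the description above: two views of a client's samples form positives, while negatives come from a representation dictionary aggregated across clients. The shapes, the temperature, and the omission of the alignment module are illustrative simplifications, not FedCA's exact formulation.

```python
# Local contrastive loss with negatives drawn from a shared dictionary.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, dictionary, tau=0.1):
    """z1, z2: (batch, dim) two augmented views; dictionary: (n_keys, dim)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    keys = F.normalize(dictionary, dim=1)
    pos = (z1 * z2).sum(dim=1, keepdim=True) / tau    # positive-pair logits
    neg = z1 @ keys.t() / tau                         # negatives from dictionary
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(len(z1), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
shared_dict = torch.randn(256, 128)   # aggregated from all clients by the server
print(contrastive_loss(z1, z2, shared_dict).item())
```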


ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles

Sep 21, 2020
Xiaoyong Yuan, Lei Ding, Lan Zhang, Xiaolin Li, Dapeng Wu

Deep neural networks (DNNs) have become essential components of various commercialized machine learning services, such as Machine Learning as a Service (MLaaS). Recent studies show that such services face severe privacy threats: well-trained DNNs owned by MLaaS providers can be stolen through public APIs, namely model stealing attacks. However, most existing works undervalue the impact of such attacks by assuming that a successful attack has to acquire confidential training data or auxiliary data related to the victim DNN. In this paper, we propose ES Attack, a novel model stealing attack without any data hurdles. Using heuristically generated synthetic data, ES Attack iteratively trains a substitute model and eventually obtains a functionally equivalent copy of the victim DNN. The experimental results reveal the severity of ES Attack: i) ES Attack successfully steals the victim model without data hurdles, and even outperforms most existing model stealing attacks that use auxiliary data in terms of model accuracy; ii) most countermeasures are ineffective against ES Attack; iii) ES Attack facilitates further attacks that rely on the stolen model.
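
A skeleton of the iterative steal-and-train loop, under stated assumptions: the victim is queried as a black box for soft labels, and the substitute is trained to match them. Gaussian-noise queries stand in for the paper's heuristic data synthesis, and both networks are toy models rather than the victims studied in the paper.

```python
# Black-box model stealing loop: synthesize queries, label via the victim,
# fit the substitute to the victim's output distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
substitute = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)

for epoch in range(50):
    x = torch.randn(256, 20)                 # stand-in for synthesized queries
    with torch.no_grad():
        y = victim(x).softmax(dim=1)         # black-box soft labels via the API
    # train the substitute to match the victim's predictive distribution
    loss = F.kl_div(substitute(x).log_softmax(dim=1), y, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```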


Distilled One-Shot Federated Learning

Sep 17, 2020
Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, Dapeng Wu

Current federated learning algorithms take tens of communication rounds transmitting unwieldy model weights under ideal circumstances, and hundreds when data are poorly distributed. Inspired by recent work on dataset distillation and distributed one-shot learning, we propose Distilled One-Shot Federated Learning (DOSFL), which reduces the number of communication rounds required to train a performant model to only one. Each client distills its private dataset and sends the synthetic data (e.g., images or sentences) to the server. The distilled data look like noise and become useless after model fitting. We empirically show that, in only one round of communication, our method can achieve 96% test accuracy on federated MNIST with LeNet (centralized: 99%), 81% on federated IMDB with a customized CNN (centralized: 86%), and 84% on federated TREC-6 with a Bi-LSTM (centralized: 89%). Using only a few rounds, DOSFL can match the centralized baselines on all three tasks. By avoiding the need for model-wise updates (i.e., weights, gradients, losses, etc.), the total communication cost of DOSFL is reduced by over an order of magnitude. We believe DOSFL represents a new direction, orthogonal to previous work, toward weight-less and gradient-less federated learning.
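
For intuition, here is a conceptual sketch of one client's distillation step in the spirit of gradient-based dataset distillation: a tiny synthetic set with learned soft labels is optimized so that a single differentiable SGD step on it, from a shared initialization of a linear model, fits the client's real data. All sizes, learning rates, and the linear model are illustrative assumptions, not DOSFL's actual procedure.

```python
# Bilevel dataset distillation sketch: optimize synthetic data so that one
# inner SGD step on it yields a model that fits the real data.
import torch
import torch.nn.functional as F

def distill(real_x, real_y, n_classes=2, n_syn=10, lr_inner=0.5, iters=300):
    dim = real_x.shape[1]
    syn_x = torch.randn(n_syn, dim, requires_grad=True)
    syn_y = torch.randn(n_syn, n_classes, requires_grad=True)  # learned soft labels
    opt = torch.optim.Adam([syn_x, syn_y], lr=0.05)
    for _ in range(iters):
        w0 = torch.zeros(dim, n_classes, requires_grad=True)   # shared init
        # one differentiable SGD step of a linear model on the synthetic data
        inner = -(syn_y.softmax(1) * F.log_softmax(syn_x @ w0, 1)).sum(1).mean()
        g, = torch.autograd.grad(inner, w0, create_graph=True)
        w1 = w0 - lr_inner * g
        # outer objective: the one-step model should fit the client's real data
        outer = F.cross_entropy(real_x @ w1, real_y)
        opt.zero_grad(); outer.backward(); opt.step()
    return syn_x.detach(), syn_y.detach()     # this is all the client uploads

x = torch.randn(200, 8); y = (x[:, 0] > 0).long()
syn_x, syn_y = distill(x, y)   # the server then trains once on (syn_x, syn_y)
```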


Asking Complex Questions with Multi-hop Answer-focused Reasoning

Sep 16, 2020
Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu

Asking questions from natural language text has attracted increasing attention recently, and several schemes have been proposed with promising results by choosing the right question words and copying relevant words from the input into the question. However, most state-of-the-art methods focus on asking simple questions involving single-hop relations. In this paper, we propose a new task, multi-hop question generation, which asks complex and semantically relevant questions by additionally discovering and modeling multiple entities and their semantic relations given a collection of documents and the corresponding answer. To solve the problem, we propose multi-hop answer-focused reasoning on a grounded, answer-centric entity graph that incorporates semantic information at different levels of granularity, including the word-level and document-level semantics of the entities and their relations. Through extensive experiments on the HOTPOTQA dataset, we demonstrate the superiority and effectiveness of our proposed model, which serves as a baseline to motivate future work.
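
The sketch below gives one naive way to build the kind of answer-centric entity graph the approach reasons over: entities co-occurring in a sentence are linked, and every entity is also linked to the answer node. The regex-based entity extraction is purely illustrative; the paper grounds real entities and models richer word-level and document-level relations.

```python
# Toy answer-centric entity graph: sentence co-occurrence edges plus
# edges tying every entity to the answer node.
from collections import defaultdict
import re

def build_answer_centric_graph(documents, answer):
    graph = defaultdict(set)
    for doc in documents:
        for sent in re.split(r"[.!?]", doc):
            # naive stand-in for entity recognition: capitalized words
            entities = set(re.findall(r"\b[A-Z][a-zA-Z]+\b", sent))
            for a in entities:                 # connect co-occurring entities
                for b in entities - {a}:
                    graph[a].add(b)
            for e in entities:                 # answer-focused edges
                graph[e].add(answer)
                graph[answer].add(e)
    return graph

docs = ["Marie Curie worked in Paris.", "Paris is the capital of France."]
print(dict(build_answer_centric_graph(docs, "France")))
```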


Connecting Web Event Forecasting with Anomaly Detection: A Case Study on Enterprise Web Applications Using Self-Supervised Neural Networks

Sep 07, 2020
Xiaoyong Yuan, Lei Ding, Malek Ben Salem, Xiaolin Li, Dapeng Wu

Web applications have recently been widely used in enterprises to assist employees in carrying out effective and efficient business processes. Forecasting upcoming web events in enterprise web applications can be beneficial in many ways, such as efficient caching and recommendation. In this paper, we present DeepEvent, a web event forecasting approach for enterprise web applications that supports better anomaly detection. DeepEvent includes three key features: web-specific neural networks that account for the characteristics of sequential web events, self-supervised learning techniques that overcome the scarcity of labeled data, and sequence embedding techniques that integrate contextual events and capture dependencies among web events. We evaluate DeepEvent on web events collected from six real-world enterprise web applications. Our experimental results demonstrate that DeepEvent is effective at forecasting sequential web events and detecting web-based anomalies. DeepEvent provides a context-based system for researchers and practitioners to better forecast web events with situational awareness.
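
As a rough illustration of the self-supervised formulation, the sketch below trains a small embedding-plus-LSTM model to predict the next web event from its context and scores anomalies by the negative log-probability assigned to observed events. The vocabulary size, architecture, and random stand-in sequences are assumptions, not DeepEvent's web-specific networks.

```python
# Self-supervised next-event prediction; anomalies score high negative
# log-probability under the learned model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NextEventModel(nn.Module):
    def __init__(self, n_events=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_events, dim)   # sequence embedding of events
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_events)

    def forward(self, seq):                        # seq: (batch, time) event ids
        out, _ = self.lstm(self.embed(seq))
        return self.head(out)                      # logits for the next event

model = NextEventModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seqs = torch.randint(0, 100, (64, 20))             # stand-in event-id sequences
for _ in range(5):                                 # self-supervised: no labels
    logits = model(seqs[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, 100), seqs[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# anomaly score: negative log-probability of each observed event
with torch.no_grad():
    logp = model(seqs[:, :-1]).log_softmax(dim=-1)
    scores = -logp.gather(-1, seqs[:, 1:].unsqueeze(-1)).squeeze(-1)
```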

* accepted at EAI SecureComm 2020 