Fei Ye

Learning Harmonic Molecular Representations on Riemannian Manifold

Mar 27, 2023
Yiqun Wang, Yuning Shen, Shi Chen, Lihao Wang, Fei Ye, Hao Zhou

Molecular representation learning plays a crucial role in AI-assisted drug discovery research. Encoding 3D molecular structures through Euclidean neural networks has become the prevailing method in the geometric deep learning community. However, the equivariance constraints and message passing in Euclidean space may limit the network's expressive power. In this work, we propose a Harmonic Molecular Representation learning (HMR) framework, which represents a molecule using the Laplace-Beltrami eigenfunctions of its molecular surface. HMR offers a multi-resolution representation of molecular geometric and chemical features on a 2D Riemannian manifold. We also introduce a harmonic message passing method to realize efficient spectral message passing over the surface manifold for better molecular encoding. Our proposed method shows predictive power comparable to current models on small-molecule property prediction, and outperforms state-of-the-art deep learning models on ligand-binding protein pocket classification and the rigid protein docking challenge, demonstrating its versatility in molecular representation learning.
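
To make the spectral encoding concrete, the sketch below computes a Laplace-Beltrami eigenbasis of a triangulated surface mesh and projects per-vertex features onto it; the Laplacian L, mass matrix M, and feature matrix are assumed to be given, and none of this is the authors' HMR implementation.

```python
# Minimal sketch, assuming a precomputed cotangent Laplacian L and lumped mass
# matrix M of a molecular surface mesh; not the authors' HMR code.
import scipy.sparse.linalg as spla

def lb_eigenbasis(L, M, k=64):
    """Solve L @ phi = lam * M @ phi for the k lowest-frequency eigenpairs."""
    # Small negative shift avoids factorizing the singular Laplacian at sigma=0.
    evals, evecs = spla.eigsh(L, k=k, M=M, sigma=-1e-8, which="LM")
    return evals, evecs            # evecs[:, i] is eigenfunction i on the vertices

def spectral_project(features, evecs, M):
    """Project per-vertex features (n, d) onto the basis: c_i = <phi_i, f>_M."""
    return evecs.T @ (M @ features)

def spectral_reconstruct(coeffs, evecs):
    """Low-pass (multi-resolution) reconstruction from the first k coefficients."""
    return evecs @ coeffs
```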

* 25 pages including Appendix 

On Pre-trained Language Models for Antibody

Jan 28, 2023
Danqing Wang, Fei Ye, Hao Zhou

Antibodies are vital proteins offering robust protection for the human body from pathogens. Both general protein and antibody-specific pre-trained language models facilitate antibody prediction tasks. However, few studies have comprehensively explored the representation capability of distinct pre-trained language models on different antibody problems. Here, we aim to answer the following key questions: (1) How do pre-trained language models perform on antibody tasks with different levels of specificity? (2) How much does a model gain if we introduce specific biological mechanisms into the pre-training process? (3) Do the learned antibody representations make sense in real-world antibody problems, such as drug discovery and understanding of the immune process? Previously, the absence of a suitable benchmark largely hindered efforts to answer these questions. To facilitate the investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models through empirical studies, and offer conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM.
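
As a minimal illustration of the kind of evaluation ATUE supports, the sketch below embeds antibody heavy-chain sequences with a general protein language model; the HuggingFace checkpoint name, the example sequences, and the mean pooling are assumptions for illustration, not the ATUE/EATLM evaluation code.

```python
# Sketch: embedding antibody sequences with a general protein language model
# (assumes a HuggingFace-hosted checkpoint; not the ATUE/EATLM evaluation code).
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "facebook/esm2_t6_8M_UR50D"   # placeholder protein LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

heavy_chains = ["EVQLVESGGGLVQPGGSLRLSCAAS", "QVQLQQSGAELARPGASVKMSCKAS"]

with torch.no_grad():
    batch = tokenizer(heavy_chains, return_tensors="pt", padding=True)
    hidden = model(**batch).last_hidden_state          # (B, L, H) residue embeddings
    mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding positions
    embeddings = (hidden * mask).sum(1) / mask.sum(1)   # mean-pooled (B, H)

# embeddings can then feed a lightweight head for an antibody-specific task.
```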

* Accepted in ICLR 2023 

Accelerating Antimicrobial Peptide Discovery with Latent Sequence-Structure Model

Nov 28, 2022
Danqing Wang, Zeyu Wen, Fei Ye, Hao Zhou, Lei Li

Antimicrobial peptides (AMPs) are a promising therapy against drug-resistant infections and a potential alternative to broad-spectrum antibiotics. Recently, an increasing number of researchers have introduced deep generative models to accelerate AMP discovery. However, current studies mainly focus on sequence attributes and ignore structural information, which is important for AMP biological function. In this paper, we propose a latent sequence-structure model for AMPs (LSSAMP) with a multi-scale VQ-VAE to incorporate secondary structures. By sampling in the latent space, LSSAMP can simultaneously generate peptides with desirable sequence attributes and secondary structures. Experimental results show that the peptides generated by LSSAMP have a high probability of being antimicrobial, and two of the 21 candidates have been verified to have good antimicrobial activity. Our model will be released to help create high-quality AMP candidates for follow-up biological experiments and to accelerate the overall AMP discovery process.
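
For readers unfamiliar with the latent design, here is a minimal sketch of the vector-quantization bottleneck that a VQ-VAE latent space relies on; the codebook size, latent dimension, and commitment weight are illustrative, and this is not the LSSAMP implementation.

```python
# Minimal sketch of a VQ-VAE quantization bottleneck (illustrative; not LSSAMP).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta                                  # commitment weight

    def forward(self, z_e):                               # z_e: (B, T, dim) encoder output
        flat = z_e.reshape(-1, z_e.size(-1))              # (B*T, dim)
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))        # squared distances to codes
        idx = d.argmin(dim=1).view(z_e.shape[:-1])        # nearest code per position
        z_q = self.codebook(idx)                          # quantized latents
        # Codebook and commitment losses; straight-through gradient to the encoder.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, loss
```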

Task-Free Continual Learning via Online Discrepancy Distance Learning

Oct 12, 2022
Fei Ye, Adrian G. Bors

Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information. Although some methods have recently been proposed for TFCL, they lack theoretical guarantees. Moreover, forgetting during TFCL had not previously been analyzed theoretically. This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis gives new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose a new approach enabled by a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL). ODDL estimates the discrepancy between the probabilistic representation of the current memory buffer and the already accumulated knowledge, and uses it as the expansion signal to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores the most relevant samples in the memory buffer through the discrepancy-based measure, further improving performance. We perform several TFCL experiments with the proposed methodology, which demonstrate that the proposed approach achieves state-of-the-art performance.
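
The expansion logic can be sketched as follows: compute a discrepancy-style score between the current memory buffer and samples representing the already accumulated knowledge, and add a mixture component when the score exceeds a threshold. The disagreement-based proxy, the threshold, and the add_component call below are illustrative assumptions, not ODDL's exact estimator.

```python
# Sketch: a discrepancy-driven expansion signal for a dynamic mixture model
# (proxy score, threshold and mixture API are assumptions, not ODDL's estimator).
import torch
import torch.nn.functional as F

def discrepancy_score(h1, h2, buffer_x, accumulated_x):
    """Proxy for the discrepancy distance between the memory buffer and the
    already accumulated knowledge: how differently two hypotheses behave on
    the two sample sets."""
    def disagreement(x):
        return F.l1_loss(F.softmax(h1(x), dim=-1), F.softmax(h2(x), dim=-1))
    return torch.abs(disagreement(buffer_x) - disagreement(accumulated_x))

def maybe_expand(mixture, score, threshold=0.2):
    """Add a new component when the buffer drifts away from stored knowledge."""
    if score.item() > threshold:
        mixture.add_component()   # assumed API of the dynamic mixture model
        return True
    return False
```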

* Accepted at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022) 

Continual Variational Autoencoder Learning via Online Cooperative Memorization

Jul 20, 2022
Fei Ye, Adrian G. Bors

Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAEs) have been successfully used in continual learning classification tasks. However, their ability to generate images with specifications corresponding to the classes and databases learned during Continual Learning (CL) is not well understood, and catastrophic forgetting remains a significant challenge. In this paper, we first analyze the forgetting behaviour of VAEs by developing a new theoretical framework that formulates CL as a dynamic optimal transport problem. This framework establishes approximate bounds on the data likelihood without requiring the task information and explains how prior knowledge is lost during the training process. We then propose a novel memory buffering approach, namely the Online Cooperative Memorization (OCM) framework, which consists of a Short-Term Memory (STM) that continually stores recent samples to provide future information for the model, and a Long-Term Memory (LTM) that aims to preserve a wide diversity of samples. The proposed OCM transfers certain samples from the STM to the LTM according to an information diversity selection criterion, without requiring any supervised signals. The OCM framework is then combined with a dynamic VAE expansion mixture network to further enhance its performance.
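
A toy sketch of the two-buffer idea: the STM keeps every recent sample, and a sample is promoted to the LTM only when it is sufficiently far, in some feature space, from everything the LTM already holds. The feature distance and threshold are assumptions for illustration, not OCM's exact diversity criterion.

```python
# Sketch: promoting samples from a short-term to a long-term memory based on
# feature-space diversity (assumed criterion; not the exact OCM rule).
import torch

def promote_diverse(stm_feats, stm_samples, ltm_feats, ltm_samples, tau=1.0):
    """Move STM samples whose nearest LTM neighbour is farther than tau."""
    for feat, sample in zip(stm_feats, stm_samples):
        if len(ltm_feats) == 0:
            ltm_feats.append(feat)
            ltm_samples.append(sample)
            continue
        dists = torch.stack([torch.norm(feat - g) for g in ltm_feats])
        if dists.min() > tau:          # novel enough -> preserve long-term
            ltm_feats.append(feat)
            ltm_samples.append(sample)
    return ltm_feats, ltm_samples
```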

* Accepted by European Conference on Computer Vision 2022 (ECCV 2022) 

Learning an evolved mixture model for task-free continual learning

Jul 11, 2022
Fei Ye, Adrian G. Bors

Recently, continual learning (CL) has gained significant interest because it enables deep learning models to acquire new knowledge without forgetting previously learnt information. However, most existing works require knowing the task identities and boundaries, which is not realistic in practice. In this paper, we address a more challenging and realistic setting in CL, namely Task-Free Continual Learning (TFCL), in which a model is trained on non-stationary data streams with no explicit task information. To address TFCL, we introduce an evolved mixture model whose network architecture is dynamically expanded to adapt to the data distribution shift. We implement this expansion mechanism by evaluating the probability distance between the knowledge stored in each mixture model component and the current memory buffer using the Hilbert-Schmidt Independence Criterion (HSIC). We further introduce two simple dropout mechanisms to selectively remove stored examples in order to avoid memory overload while preserving memory diversity. Empirical results demonstrate that the proposed approach achieves excellent performance.
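
The expansion signal is built on HSIC; a biased empirical HSIC estimate with Gaussian kernels can be computed as below, where the kernel choice and bandwidth are illustrative rather than the paper's exact configuration.

```python
# Sketch: biased empirical HSIC estimate with Gaussian kernels, usable to
# compare a component's stored knowledge with the current memory buffer
# (kernel and bandwidth are illustrative choices).
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    sq = np.sum(X**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * X @ X.T            # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """HSIC_b(X, Y) = trace(K H L H) / (n - 1)^2 with centering matrix H."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = gaussian_kernel(X, sigma)
    L = gaussian_kernel(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```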

* Accepted by the 29th IEEE International Conference on Image Processing (ICIP 2022) 

Supplemental Material: Lifelong Generative Modelling Using Dynamic Expansion Graph Model

Mar 25, 2022
Fei Ye, Adrian G. Bors

In this article, we provide the appendix for Lifelong Generative Modelling Using Dynamic Expansion Graph Model. This appendix includes additional visual results as well as numerical results on challenging datasets. In addition, we provide detailed proofs for the proposed theoretical analysis framework. The source code can be found at https://github.com/dtuzi123/Expansion-Graph-Model.

Lifelong Generative Modelling Using Dynamic Expansion Graph Model

Dec 15, 2021
Fei Ye, Adrian G. Bors

Variational Autoencoders (VAEs) suffer from degraded performance when learning several successive tasks, a consequence of catastrophic forgetting. To address this knowledge loss, VAEs use either Generative Replay (GR) mechanisms or Expanding Network Architectures (ENA). In this paper we study the forgetting behaviour of VAEs using a joint GR and ENA methodology, by deriving an upper bound on the negative marginal log-likelihood. This theoretical analysis provides new insights into how VAEs forget previously learnt knowledge during lifelong learning. The analysis indicates that the best performance is achieved with model mixtures under the ENA framework, where there is no restriction on the number of components. However, an ENA-based approach may require an excessive number of parameters. This motivates us to propose a novel Dynamic Expansion Graph Model (DEGM). DEGM expands its architecture according to the novelty of each new database relative to the information already learnt by the network from previous tasks. DEGM training optimizes knowledge structuring, characterizing the joint probabilistic representations corresponding to past and more recently learned tasks. We demonstrate that DEGM guarantees optimal performance for each task while also minimizing the required number of parameters. Supplementary materials (SM) and source code are available at https://github.com/dtuzi123/Expansion-Graph-Model.

* Accepted in Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022) 

Lifelong Infinite Mixture Model Based on Knowledge-Driven Dirichlet Process

Aug 25, 2021
Fei Ye, Adrian G. Bors

Recent research efforts in lifelong learning propose to grow a mixture of models to adapt to an increasing number of tasks. This methodology shows promising results in overcoming catastrophic forgetting. However, the theory behind these successful models is still not well understood. In this paper, we perform a theoretical analysis of lifelong learning models by deriving risk bounds based on the discrepancy distance between the probabilistic representation of the data generated by the model and that corresponding to the target dataset. Inspired by this theoretical analysis, we introduce a new lifelong learning approach, namely the Lifelong Infinite Mixture (LIMix) model, which can automatically expand its network architecture or choose an appropriate component to adapt its parameters for learning a new task, while preserving previously learnt information. We propose to incorporate knowledge by means of Dirichlet processes, using a gating mechanism that computes the dependence between the knowledge previously learnt and stored in each component and a new set of data. In addition, we train a compact Student model which can accumulate cross-domain representations over time and make quick inferences. The code is available at https://github.com/dtuzi123/Lifelong-infinite-mixture-model.
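
A toy sketch of the knowledge-driven gating: score a new batch under every existing component and either route it to the best-matching component or open a new one, with a Chinese-restaurant-process prior controlled by a concentration parameter. The component interface (num_samples, log_likelihood) and the concentration value are assumptions, not LIMix's exact gate.

```python
# Sketch: CRP-style gating between reusing an existing component and expanding,
# in the spirit of a knowledge-driven Dirichlet process (illustrative only).
import numpy as np

def select_component(batch, components, alpha=1.0):
    """components: objects exposing .num_samples and .log_likelihood(batch) (assumed API).
    Returns the index of the chosen component, or len(components) to signal expansion."""
    counts = np.array([c.num_samples for c in components], dtype=float)
    log_prior = np.log(np.concatenate([counts, [alpha]]))     # CRP prior over components
    log_lik = np.array([c.log_likelihood(batch) for c in components] + [0.0])
    logits = log_prior + log_lik
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```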

* Accepted by International Conference on Computer Vision (ICCV 2021) 

Lifelong Twin Generative Adversarial Networks

Jul 09, 2021
Fei Ye, Adrian G. Bors

In this paper, we propose a new continually learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs). LT-GANs learns a sequence of tasks from several databases, and its architecture consists of three components: two identical generators, namely the Teacher and the Assistant, and one Discriminator. To allow LT-GANs to learn new concepts without forgetting, we introduce a new lifelong training approach, namely Lifelong Adversarial Knowledge Distillation (LAKD), which encourages the Teacher and the Assistant to alternately teach each other while learning a new database. This training approach favours transferring knowledge from the more knowledgeable player to the player that knows less about a previously given task.
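
A rough sketch of the alternating distillation idea behind LAKD: at each step the generator that currently replays past tasks better acts as the knowledge source, and the other is pulled toward its outputs. The replay-quality scores and the particular losses below are assumptions for illustration, not the exact LAKD objective.

```python
# Sketch: alternating knowledge distillation between two generators
# (replay-quality scores and losses are assumptions, not the exact LAKD objective).
import torch
import torch.nn.functional as F

def lakd_step(teacher, assistant, discriminator, z, quality_teacher, quality_assistant):
    """Pull the currently weaker generator toward the stronger one on noise z."""
    if quality_teacher >= quality_assistant:
        source, target = teacher, assistant
    else:
        source, target = assistant, teacher
    with torch.no_grad():
        ref = source(z)                     # samples encoding past knowledge
    gen = target(z)
    # Match the samples directly and match the discriminator's responses to them.
    distill_loss = F.l1_loss(gen, ref) + F.l1_loss(discriminator(gen), discriminator(ref))
    return distill_loss
```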

* Accepted at International Conference on Image Processing (ICIP 2021) 