
Jian Peng

School of Information Engineering, Jiangxi Vocational College of Finance & Economics, Jiujiang, China

A 3D Molecule Generative Model for Structure-Based Drug Design

Mar 20, 2022

FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours

Mar 04, 2022

Directed Weight Neural Networks for Protein Structure Representation Learning

Jan 28, 2022

Overcome Anterograde Forgetting with Cycled Memory Networks

Dec 04, 2021

Learning Long-Term Reward Redistribution via Randomized Return Decomposition

Nov 26, 2021

Reviewing continual learning from the perspective of human-level intelligence

Nov 23, 2021

Learning by Active Forgetting for Neural Networks

Nov 21, 2021

Hindsight Foresight Relabeling for Meta-Reinforcement Learning

Sep 18, 2021

Pixel Contrastive-Consistent Semi-Supervised Semantic Segmentation

Aug 20, 2021

Coordinate-wise Control Variates for Deep Policy Gradients

Aug 11, 2021