Di He

Do Transformers Really Perform Bad for Graph Representation?
Jun 17, 2021

How could Neural Networks understand Programs?
May 31, 2021

Adversarial Training with Rectified Rejection
May 31, 2021

Wav2vec-C: A Self-supervised Model for Speech Representation Learning
Mar 09, 2021

Transformers with Competitive Ensembles of Independent Mechanisms
Feb 27, 2021

LazyFormer: Self Attention with Lazy Update
Feb 25, 2021

Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder
Feb 18, 2021

Revisiting Language Encoding in Learning Multilingual Representations
Feb 16, 2021

Towards Certifying $\ell_\infty$ Robustness using Neural Networks with $\ell_\infty$-dist Neurons
Feb 10, 2021

CODE-AE: A Coherent De-confounding Autoencoder for Predicting Patient-Specific Drug Response From Cell Line Transcriptomics
Jan 31, 2021