Di He

First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track

Jun 20, 2021
Chengxuan Ying, Mingqi Yang, Shuxin Zheng, Guolin Ke, Shengjie Luo, Tianle Cai, Chenglin Wu, Yuxin Wang, Yanming Shen, Di He

Do Transformers Really Perform Bad for Graph Representation?

Jun 17, 2021
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu

How could Neural Networks understand Programs?

May 31, 2021
Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu

Adversarial Training with Rectified Rejection

May 31, 2021
Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu

Wav2vec-C: A Self-supervised Model for Speech Representation Learning

Mar 09, 2021
Samik Sadhu, Di He, Che-Wei Huang, Sri Harish Mallidi, Minhua Wu, Ariya Rastrow, Andreas Stolcke, Jasha Droppo, Roland Maas

Transformers with Competitive Ensembles of Independent Mechanisms

Feb 27, 2021
Alex Lamb, Di He, Anirudh Goyal, Guolin Ke, Chien-Feng Liao, Mirco Ravanelli, Yoshua Bengio

LazyFormer: Self Attention with Lazy Update

Feb 25, 2021
Chengxuan Ying, Guolin Ke, Di He, Tie-Yan Liu

Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder

Feb 18, 2021
Shuqi Lu, Chenyan Xiong, Di He, Guolin Ke, Waleed Malik, Zhicheng Dou, Paul Bennett, Tie-Yan Liu, Arnold Overwijk
