Chuhan Wu

UserBERT: Contrastive User Model Pre-training

Sep 03, 2021

Smart Bird: Learnable Sparse Attention for Efficient and Effective Transformer

Sep 02, 2021

FedKD: Communication Efficient Federated Learning via Knowledge Distillation

Aug 30, 2021

Is News Recommendation a Sequential Recommendation Task?

Aug 26, 2021

Personalized News Recommendation: A Survey

Jul 08, 2021

DebiasGAN: Eliminating Position Bias in News Recommendation with Adversarial Learning

Jun 11, 2021

PP-Rec: News Recommendation with Personalized User Interest and Time-aware News Popularity

Jun 10, 2021

HieRec: Hierarchical User Interest Modeling for Personalized News Recommendation

Jun 08, 2021

Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling

Jun 02, 2021

One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers

Jun 02, 2021