
Chenliang Li

Automatic Expert Selection for Multi-Scenario and Multi-Task Search

Jun 06, 2022

mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections

May 25, 2022

Price DOES Matter! Modeling Price and Interest Preferences in Session-based Recommendation

May 09, 2022

When Multi-Level Meets Multi-Interest: A Multi-Grained Neural Model for Sequential Recommendation

May 03, 2022

Knowledge Graph Contrastive Learning for Recommendation

May 02, 2022

Recommender May Not Favor Loyal Users

Apr 12, 2022

Achieving Human Parity on Visual Question Answering

Nov 19, 2021

Concept-Aware Denoising Graph Neural Network for Micro-Video Recommendation

Sep 28, 2021

Grid-VLP: Revisiting Grid Features for Vision-Language Pre-training

Aug 21, 2021

E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning

Jun 04, 2021