Minghui Qiu

DSA, Hong Kong University of Science and Technology, Guangzhou

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Apr 01, 2022

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

Dec 02, 2021

HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression

Oct 16, 2021

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

Aug 20, 2021

Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification

Jul 26, 2021

Global Context Enhanced Graph Neural Networks for Session-based Recommendation

Jun 09, 2021

Kaleido-BERT: Vision-Language Pre-training on Fashion Domain

Apr 15, 2021

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation

Jan 20, 2021

Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

Dec 02, 2020

Learning to Expand: Reinforced Pseudo-relevance Feedback Selection for Information-seeking Conversations

Nov 25, 2020