Fuli Luo

Towards Unified Prompt Tuning for Few-shot Text Classification

May 11, 2022

On Effectively Learning of Knowledge in Continual Pre-training

Apr 17, 2022

Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency

Apr 06, 2022

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Apr 01, 2022

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

Dec 14, 2021

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning

Sep 13, 2021

SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels

Mar 14, 2021

CAPT: Contrastive Pre-Training for Learning Denoised Sequence Representations

Oct 30, 2020

VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation

Oct 30, 2020

Inductively Representing Out-of-Knowledge-Graph Entities by Optimal Estimation Under Translational Assumptions

Sep 27, 2020