
Hua Wu

A Simple yet Effective Self-Debiasing Framework for Transformer Models

Jun 02, 2023

Learning In-context Learning for Named Entity Recognition

May 26, 2023

TOME: A Two-stage Approach for Model-based Retrieval

May 18, 2023

Improving Zero-shot Multilingual Neural Machine Translation by Leveraging Cross-lingual Consistency Regularization

May 12, 2023

SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases

Feb 28, 2023

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models

Feb 09, 2023

Universal Information Extraction as Unified Semantic Matching

Jan 09, 2023

ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

Jan 09, 2023

Query Enhanced Knowledge-Intensive Conversation via Unsupervised Joint Modeling

Dec 19, 2022

ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages

Dec 13, 2022