Shuohuan Wang

Dual Modalities of Text: Visual and Textual Generative Pre-training
Apr 17, 2024

On Training Data Influence of GPT Models
Apr 11, 2024

Tool-Augmented Reward Modeling
Oct 02, 2023

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models
Feb 09, 2023

ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages
Dec 13, 2022

X-PuDu at SemEval-2022 Task 6: Multilingual Learning for English and Arabic Sarcasm Detection
Nov 30, 2022

X-PuDu at SemEval-2022 Task 7: A Replaced Token Detection Task Pre-trained Model with Pattern-aware Ensembling for Identifying Plausible Clarifications
Nov 27, 2022

ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation
Nov 09, 2022

ERNIE-SAT: Speech and Text Joint Pretraining for Cross-Lingual Multi-Speaker Text-to-Speech
Nov 07, 2022

Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters
May 19, 2022