Hua Wu

UNIMO-2: End-to-End Unified Vision-Language Grounded Learning

Mar 17, 2022

DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training

Mar 17, 2022

Long Time No See! Open-Domain Conversation with Long-Term Persona Memory

Mar 14, 2022

Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods

Mar 10, 2022

ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation

Dec 31, 2021

ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation

Dec 23, 2021

TOD-DA: Towards Boosting the Robustness of Task-oriented Dialogue Modeling on Spoken Conversations

Dec 23, 2021

DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models

Dec 16, 2021

CELLS: Cost-Effective Evolution in Latent Space for Goal-Directed Molecular Generation

Dec 05, 2021

Docking-based Virtual Screening with Multi-Task Learning

Nov 18, 2021