Fandong Meng

Towards Unifying Multi-Lingual and Cross-Lingual Summarization

May 16, 2023

RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training

May 13, 2023

WeLayout: WeChat Layout Analysis System for the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents

May 11, 2023

Investigating Forgetting in Pre-Trained Representations Through Continual Learning

May 10, 2023

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias

May 08, 2023

BranchNorm: Robustly Scaling Extremely Deep Transformers

May 04, 2023

Unified Model Learning for Various Neural Machine Translation

May 04, 2023

Is ChatGPT a Good NLG Evaluator? A Preliminary Study

Mar 07, 2023

Cross-Lingual Summarization via ChatGPT

Feb 28, 2023

A Multi-task Multi-stage Transitional Training Framework for Neural Chat Translation

Jan 27, 2023