
Zewen Chi

Language Models are General-Purpose Interfaces

Jun 13, 2022

On the Representation Collapse of Sparse Mixture of Experts

Apr 20, 2022

Cross-Lingual Phrase Retrieval

Apr 19, 2022

Bridging the Gap: Cross-Lingual Summarization with Compression Rate

Oct 15, 2021

Cross-Lingual Language Model Meta-Pretraining

Sep 23, 2021

XLM-E: Cross-lingual Language Model Pre-training via ELECTRA

Jun 30, 2021

Consistency Regularization for Cross-Lingual Fine-Tuning

Jun 15, 2021

Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment

Jun 11, 2021

mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs

Apr 18, 2021

A Robust and Domain-Adaptive Approach for Low-Resource Named Entity Recognition

Jan 02, 2021