
En-Shiun Annie Lee

WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
Oct 16, 2024

URIEL+: Enhancing Linguistic Inclusion and Usability in a Typological and Multilingual Knowledge Base
Sep 27, 2024

ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models
Jun 14, 2024

IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
Jun 05, 2024

A Reproducibility Study on Quantifying Language Similarity: The Impact of Missing Values in the URIEL Knowledge Base
May 17, 2024

Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation
Apr 05, 2024

Enhancing Hokkien Dual Translation by Exploring and Standardizing of Four Writing Systems
Mar 18, 2024

Predicting Machine Translation Performance on Low-Resource Languages: The Role of Domain Similarity
Feb 04, 2024

Leveraging Auxiliary Domain Parallel Data in Intermediate Task Fine-tuning for Low-resource Translation
Jun 02, 2023

Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?
Apr 09, 2022