Aditya Siddhant

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Reinforced Self-Training (ReST) for Language Modeling

Aug 21, 2023

SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation

May 22, 2023

Dialect-robust Evaluation of Generated Text

Nov 02, 2022

Building Machine Translation Systems for the Next Thousand Languages

May 16, 2022

Towards the Next 1000 Languages in Multilingual Machine Translation: Exploring the Synergy Between Supervised and Self-Supervised Learning

Jan 13, 2022

DOCmT5: Document-Level Pretraining of Multilingual Language Models

Dec 16, 2021

nmT5 -- Is parallel data still relevant for pre-training massively multilingual language models?

Jun 03, 2021

XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation

Apr 15, 2021