
Linting Xue

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

PaLM 2 Technical Report

May 17, 2023

PaLI: A Jointly-Scaled Multilingual Language-Image Model

Sep 16, 2022

Towards Multi-Lingual Visual Question Answering

Sep 12, 2022

nmT5 -- Is parallel data still relevant for pre-training massively multilingual language models?

Jun 03, 2021

ByT5: Towards a token-free future with pre-trained byte-to-byte models

May 28, 2021

mT5: A massively multilingual pre-trained text-to-text transformer

Oct 23, 2020

Multilingual Synthetic Question and Answer Generation for Cross-Lingual Reading Comprehension

Oct 22, 2020