Mia Chen

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Development of Authenticated Clients and Applications for ICICLE CI Services -- Final Report for the REHS Program, June-August, 2022

Apr 17, 2023

Towards End-to-End In-Image Neural Machine Translation

Oct 20, 2020

Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation

May 11, 2020

Faster Transformer Decoding: N-gram Masked Self-Attention

Jan 14, 2020