Daniel Cer

Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Pier Giuseppe Sessa, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu-hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy

Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval

Nov 10, 2023
Nandan Thakur, Jianmo Ni, Gustavo Hernández Ábrego, John Wieting, Jimmy Lin, Daniel Cer

Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts

Oct 10, 2022
Cicero Nogueira dos Santos, Zhe Dong, Daniel Cer, John Nham, Siamak Shakeri, Jianmo Ni, Yun-hsuan Sung

Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation

May 25, 2022
Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

Oct 15, 2021
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer

A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations

Sep 10, 2021
Ziyi Yang, Yinfei Yang, Daniel Cer, Eric Darve

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

Aug 26, 2021
Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang

NT5?! Training T5 to Perform Numerical Reasoning

Apr 15, 2021
Peng-Jian Yang, Ying Ting Chen, Yuechan Chen, Daniel Cer

Universal Sentence Representation Learning with Conditional Masked Language Model

Dec 29, 2020
Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, Eric Darve
