Armand Joulin

Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Pier Giuseppe Sessa, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu-hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy

Scalable Pre-training of Large Autoregressive Image Models

Jan 16, 2024
Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar, Joshua M Susskind, Armand Joulin

PaSS: Parallel Speculative Sampling

Nov 22, 2023
Giovanni Monea, Armand Joulin, Edouard Grave

ImageBind: One Embedding Space To Bind Them All

May 09, 2023
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

DINOv2: Learning Robust Visual Features without Supervision

Apr 14, 2023
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski

The effectiveness of MAE pre-pretraining for billion-scale pretraining

Mar 23, 2023
Mannat Singh, Quentin Duval, Kalyan Vasudev Alwala, Haoqi Fan, Vaibhav Aggarwal, Aaron Adcock, Armand Joulin, Piotr Dollár, Christoph Feichtenhofer, Ross Girshick, Rohit Girdhar, Ishan Misra

LLaMA: Open and Efficient Foundation Language Models

Feb 27, 2023
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample

Few-shot Learning with Retrieval Augmented Language Models

Aug 08, 2022
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, Edouard Grave

Improving Wikipedia Verifiability with AI

Jul 08, 2022
Fabio Petroni, Samuel Broscheit, Aleksandra Piktus, Patrick Lewis, Gautier Izacard, Lucas Hosseini, Jane Dwivedi-Yu, Maria Lomeli, Timo Schick, Pierre-Emmanuel Mazaré, Armand Joulin, Edouard Grave, Sebastian Riedel

OmniMAE: Single Model Masked Pretraining on Images and Videos

Jun 16, 2022
Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
