Amjad Almahairi

Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model

Dec 19, 2023
Shraman Pramanick, Guangxing Han, Rui Hou, Sayan Nag, Ser-Nam Lim, Nicolas Ballas, Qifan Wang, Rama Chellappa, Amjad Almahairi

Llama 2: Open Foundation and Fine-Tuned Chat Models

Jul 19, 2023
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom

Learning Easily Updated General Purpose Text Representations with Adaptable Task-Specific Prefixes

May 22, 2023
Kuan-Hao Huang, Liang Tan, Rui Hou, Sinong Wang, Amjad Almahairi, Ruty Rinott

Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization

May 06, 2023
Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Jimmy Ba, Amjad Almahairi

Progressive Prompts: Continual Learning for Language Models

Jan 29, 2023
Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi

Uniform Masking Prevails in Vision-Language Pretraining

Dec 10, 2022
Siddharth Verma, Yuchen Lu, Rui Hou, Hanchao Yu, Nicolas Ballas, Madian Khabsa, Amjad Almahairi

Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI

May 25, 2022
Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

Oct 14, 2021
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa

Unsupervised Learning of Dense Visual Representations

Nov 11, 2020
Pedro O. Pinheiro, Amjad Almahairi, Ryan Y. Benmalek, Florian Golemo, Aaron Courville

The Impact of Preprocessing on Arabic-English Statistical and Neural Machine Translation

Jun 27, 2019
Mai Oudah, Amjad Almahairi, Nizar Habash
