Gargi Ghosh

Demystifying CLIP Data

Oct 02, 2023
Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer


Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning

Sep 05, 2023
Lili Yu, Bowen Shi, Ramakanth Pasunuru, Benjamin Muller, Olga Golovneva, Tianlu Wang, Arun Babu, Binh Tang, Brian Karrer, Shelly Sheynin, Candace Ross, Adam Polyak, Russell Howes, Vasu Sharma, Puxin Xu, Hovhannes Tamoyan, Oron Ashual, Uriel Singer, Shang-Wen Li, Susan Zhang, Richard James, Gargi Ghosh, Yaniv Taigman, Maryam Fazel-Zarandi, Asli Celikyilmaz, Luke Zettlemoyer, Armen Aghajanyan


LIMA: Less Is More for Alignment

May 18, 2023
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy


CiT: Curation in Training for Effective Vision-Language Data

Jan 05, 2023
Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer


ALERT: Adapting Language Models to Reasoning Tasks

Dec 16, 2022
Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, Asli Celikyilmaz


MAViL: Masked Audio-Video Learners

Dec 15, 2022
Po-Yao Huang, Vasu Sharma, Hu Xu, Chaitanya Ryali, Haoqi Fan, Yanghao Li, Shang-Wen Li, Gargi Ghosh, Jitendra Malik, Christoph Feichtenhofer


CM3: A Causal Masked Multimodal Model of the Internet

Jan 19, 2022
Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer


VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding

Oct 01, 2021
Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer


HTLM: Hyper-Text Pre-Training and Prompting of Language Models

Jul 14, 2021
Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer


VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding

May 20, 2021
Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer
