
Barlas Oguz


The Role of Chain-of-Thought in Complex Vision-Language Reasoning Task

Nov 15, 2023
Yifan Wu, Pengchuan Zhang, Wenhan Xiong, Barlas Oguz, James C. Gee, Yixin Nie

Jointly Training Large Autoregressive Multimodal Models

Sep 28, 2023
Emanuele Aiello, Lili Yu, Yixin Nie, Armen Aghajanyan, Barlas Oguz

Effective Long-Context Scaling of Foundation Models

Sep 27, 2023
Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma

Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023
Ganesh Jawahar, Haichuan Yang, Yunyang Xiong, Zechun Liu, Dilin Wang, Fei Sun, Meng Li, Aasish Pappu, Barlas Oguz, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Raghuraman Krishnamoorthi, Vikas Chandra

Binary and Ternary Natural Language Generation

Jun 02, 2023
Zechun Liu, Barlas Oguz, Aasish Pappu, Yangyang Shi, Raghuraman Krishnamoorthi

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

May 29, 2023
Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, Vikas Chandra

How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval

Feb 15, 2023
Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen

CITADEL: Conditional Token Interaction via Dynamic Lexical Routing for Efficient and Effective Multi-Vector Retrieval

Nov 18, 2022
Minghan Li, Sheng-Chieh Lin, Barlas Oguz, Asish Ghoshal, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen

Bridging the Training-Inference Gap for Dense Phrase Retrieval

Oct 25, 2022
Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang

A Study on the Efficiency and Generalization of Light Hybrid Retrievers

Oct 04, 2022
Man Luo, Shashank Jain, Anchit Gupta, Arash Einolghozati, Barlas Oguz, Debojeet Chatterjee, Xilun Chen, Chitta Baral, Peyman Heidari
