Dmytro Okhonko

LegoNN: Building Modular Encoder-Decoder Models

Jun 07, 2022
Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed

CM3: A Causal Masked Multimodal Model of the Internet

Jan 19, 2022
Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer

The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus

Dec 18, 2021
Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel

CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training

Oct 14, 2021
Patrick Huber, Armen Aghajanyan, Barlas Oğuz, Dmytro Okhonko, Wen-tau Yih, Sonal Gupta, Xilun Chen

VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding

Oct 01, 2021
Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer

HTLM: Hyper-Text Pre-Training and Prompting of Language Models

Jul 14, 2021
Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer

NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned

Jan 01, 2021
Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih

Unified Open-Domain Question Answering with Structured and Unstructured Knowledge

Dec 29, 2020
Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Scott Yih

fairseq S2T: Fast Speech-to-Text Modeling with fairseq

Oct 11, 2020
Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino

Training ASR models by Generation of Contextual Information

Oct 27, 2019
Kritika Singh, Dmytro Okhonko, Jun Liu, Yongqiang Wang, Frank Zhang, Ross Girshick, Sergey Edunov, Fuchun Peng, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed
