Omer Levy

Aligned Cross Entropy for Non-Autoregressive Machine Translation
Apr 03, 2020
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy

Semi-Autoregressive Training Improves Mask-Predict Decoding
Jan 23, 2020
Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer

Improving Transformer Models by Reordering their Sublayers
Nov 10, 2019
Ofir Press, Noah A. Smith, Omer Levy

Blockwise Self-Attention for Long Document Understanding
Nov 07, 2019
Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang

Generalization through Memorization: Nearest Neighbor Language Models
Nov 01, 2019
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Oct 29, 2019
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

Structural Language Models for Any-Code Generation
Sep 30, 2019
Uri Alon, Roy Sadaka, Omer Levy, Eran Yahav

BERT for Coreference Resolution: Baselines and Analysis
Sep 01, 2019
Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer

SpanBERT: Improving Pre-training by Representing and Predicting Spans
Jul 31, 2019
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Jul 26, 2019
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov