Omer Levy

SpanBERT: Improving Pre-training by Representing and Predicting Spans
Jul 24, 2019
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy

What Does BERT Look At? An Analysis of BERT's Attention
Jun 11, 2019
Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning

Are Sixteen Heads Really Better than One?
May 25, 2019
Paul Michel, Omer Levy, Graham Neubig

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
May 02, 2019
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman

Constant-Time Machine Translation with Conditional Masked Language Models
Apr 19, 2019
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer

Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation
Feb 05, 2019
Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, Marjan Ghazvininejad

code2vec: Learning Distributed Representations of Code
Oct 30, 2018
Uri Alon, Meital Zilberstein, Omer Levy, Eran Yahav

pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference
Oct 20, 2018
Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer

code2seq: Generating Sequences from Structured Representations of Code
Oct 10, 2018
Uri Alon, Omer Levy, Eran Yahav

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Sep 18, 2018
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
