Luke Zettlemoyer

Pre-training via Paraphrasing

Jun 26, 2020
Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer

Moving Down the Long Tail of Word Sense Disambiguation with Gloss-Informed Biencoders

Jun 02, 2020
Terra Blevins, Luke Zettlemoyer

Active Learning for Coreference Resolution using Discrete Annotation

May 19, 2020
Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer

An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction

May 01, 2020
Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer

AmbigQA: Answering Ambiguous Open-domain Questions

Apr 22, 2020
Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer

Aligned Cross Entropy for Non-Autoregressive Machine Translation

Apr 03, 2020
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy

Semi-Autoregressive Training Improves Mask-Predict Decoding

Jan 23, 2020
Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer

Multilingual Denoising Pre-training for Neural Machine Translation

Jan 23, 2020
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer
