John Wieting

QA Is the New KR: Question-Answer Pairs as Knowledge Bases

Jul 01, 2022
Wenhu Chen, William W. Cohen, Michiel De Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting

RankGen: Improving Text Generation with Large Ranking Models

May 19, 2022
Kalpesh Krishna, Yapei Chang, John Wieting, Mohit Iyyer

Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization

Apr 28, 2022
Yue Dong, John Wieting, Pat Verga

Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

Apr 10, 2022
Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, William Cohen

Improving the Diversity of Unsupervised Paraphrasing with Embedding Outputs

Oct 25, 2021
Monisha Jegadeesan, Sachin Kumar, John Wieting, Yulia Tsvetkov

On The Ingredients of an Effective Zero-shot Semantic Parser

Oct 15, 2021
Pengcheng Yin, John Wieting, Avirup Sil, Graham Neubig

Paraphrastic Representations at Scale

Apr 30, 2021
John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick

CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation

Mar 31, 2021
Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting
