Ming-Wei Chang

Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations

Apr 15, 2021
Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang

CapWAP: Captioning with a Purpose

Nov 09, 2020
Adam Fisch, Kenton Lee, Ming-Wei Chang, Jonathan H. Clark, Regina Barzilay

Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?

Oct 24, 2020
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova

Open Question Answering over Tables and Text

Oct 20, 2020
Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen

Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

May 05, 2020
Hao Cheng, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

REALM: Retrieval-Augmented Language Model Pre-Training

Feb 10, 2020
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

Sep 25, 2019
Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation

Aug 23, 2019
Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Zero-Shot Entity Linking by Reading Entity Descriptions

Jun 18, 2019
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

Latent Retrieval for Weakly Supervised Open Domain Question Answering

Jun 06, 2019
Kenton Lee, Ming-Wei Chang, Kristina Toutanova
