William Yang Wang

Investigating African-American Vernacular English in Transformer-Based Text Generation

Oct 29, 2020
Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, William Yang Wang


Unsupervised Multi-hop Question Answering by Question Generation

Oct 23, 2020
Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang


Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation

Oct 18, 2020
Jiannan Xiang, Xin Eric Wang, William Yang Wang


KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

Oct 11, 2020
Wenhu Chen, Yu Su, Xifeng Yan, William Yang Wang


Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations

Oct 07, 2020
Wanrong Zhu, Xin Eric Wang, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang


SSCR: Iterative Language-Based Image Editing via Self-Supervised Counterfactual Reasoning

Sep 29, 2020
Tsu-Jui Fu, Xin Eric Wang, Scott Grafton, Miguel Eckstein, William Yang Wang


Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval

Sep 27, 2020
Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz
