William Yang Wang

Local Explanation of Dialogue Response Generation

Jun 11, 2021
Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang

ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation

Jun 10, 2021
Wanrong Zhu, Xin Eric Wang, An Yan, Miguel Eckstein, William Yang Wang

VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation

Jun 08, 2021
Linjie Li, Jie Lei, Zhe Gan, Licheng Yu, Yen-Chun Chen, Rohit Pillai, Yu Cheng, Luowei Zhou, Xin Eric Wang, William Yang Wang, Tamara Lee Berg, Mohit Bansal, Jingjing Liu, Lijuan Wang, Zicheng Liu

Counterfactual Maximum Likelihood Estimation for Training Deep Networks

Jun 07, 2021
Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang

Language-Driven Image Style Transfer

Jun 01, 2021
Tsu-Jui Fu, Xin Eric Wang, William Yang Wang

Zero-shot Fact Verification by Claim Generation

May 31, 2021
Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang

Comparing Visual Reasoning in Humans and AI

Apr 29, 2021
Shravan Murlidaran, William Yang Wang, Miguel P. Eckstein

Gaze Perception in Humans and CNN-Based Model

Apr 17, 2021
Nicole X. Han, William Yang Wang, Miguel P. Eckstein

Language-based Video Editing via Multi-Modal Multi-Level Transformer

Apr 02, 2021
Tsu-Jui Fu, Xin Eric Wang, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang

Diagnosing Vision-and-Language Navigation: What Really Matters

Mar 30, 2021
Wanrong Zhu, Yuankai Qi, Pradyumna Narayana, Kazoo Sone, Sugato Basu, Xin Eric Wang, Qi Wu, Miguel Eckstein, William Yang Wang
