Publications by Yonatan Bisk

Multi-View Learning for Vision-and-Language Navigation
Mar 03, 2020
Qiaolin Xia, Xiujun Li, Chunyuan Li, Yonatan Bisk, Zhifang Sui, Jianfeng Gao, Yejin Choi, Noah A. Smith

ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
Dec 03, 2019
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox

PIQA: Reasoning about Physical Commonsense in Natural Language
Nov 26, 2019
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi

Robust Navigation with Language Pretraining and Stochastic Sampling
Sep 05, 2019
Xiujun Li, Chunyuan Li, Qiaolin Xia, Yonatan Bisk, Asli Celikyilmaz, Jianfeng Gao, Noah Smith, Yejin Choi

Defending Against Neural Fake News
May 29, 2019
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi

HellaSwag: Can a Machine Really Finish Your Sentence?
May 19, 2019
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi

Improving Robot Success Detection using Static Object Data
Apr 02, 2019
Rosario Scalise, Jesse Thomason, Yonatan Bisk, Siddhartha Srinivasa

Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation
Apr 02, 2019
Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa

Prospection: Interpretable Plans From Language By Predicting the Future
Mar 20, 2019
Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan, Dieter Fox

Character-based Surprisal as a Model of Human Reading in the Presence of Errors
Feb 02, 2019
Michael Hahn, Frank Keller, Yonatan Bisk, Yonatan Belinkov
