Dan Gutfreund

AGENT: A Benchmark for Core Psychological Reasoning

Feb 25, 2021
Tianmin Shu, Abhishek Bhandwaldar, Chuang Gan, Kevin A. Smith, Shari Liu, Dan Gutfreund, Elizabeth Spelke, Joshua B. Tenenbaum, Tomer D. Ullman

Template Controllable keywords-to-text Generation

Nov 07, 2020
Abhijit Mishra, Md Faisal Mahbub Chowdhury, Sagar Manohar, Dan Gutfreund, Karthik Sankaranarayanan

Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

Oct 25, 2020
Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation

Jul 09, 2020
Chuang Gan, Jeremy Schwartz, Seth Alter, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Damian Mrowca, Michael Lingelbach, Aidan Curtis, Kevin Feigelis, Daniel M. Bear, Dan Gutfreund, David Cox, James J. DiCarlo, Josh McDermott, Joshua B. Tenenbaum, Daniel L. K. Yamins

SimVAE: Simulator-Assisted Training for Interpretable Generative Models

Nov 19, 2019
Akash Srivastava, Jessie Rosenberg, Dan Gutfreund, David D. Cox

Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding

Nov 04, 2019
Mathew Monfort, Kandan Ramakrishnan, Alex Andonian, Barry A McNamara, Alex Lascelles, Bowen Pan, Quanfu Fan, Dan Gutfreund, Rogerio Feris, Aude Oliva

Reasoning About Human-Object Interactions Through Dual Attention Networks

Sep 10, 2019
Tete Xiao, Quanfu Fan, Dan Gutfreund, Mathew Monfort, Aude Oliva, Bolei Zhou
