Michael J. Tarr

Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models
Oct 23, 2023
Gabriel Sarch, Yue Wu, Michael J. Tarr, Katerina Fragkiadaki

BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity
Oct 06, 2023
Andrew F. Luo, Margaret M. Henderson, Michael J. Tarr, Leila Wehbe

Thinking Like an Annotator: Generation of Dataset Labeling Instructions
Jun 24, 2023
Nadine Chang, Francesco Ferroni, Michael J. Tarr, Martial Hebert, Deva Ramanan

Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models
Jun 05, 2023
Andrew F. Luo, Margaret M. Henderson, Leila Wehbe, Michael J. Tarr

Quantifying the Roles of Visual, Linguistic, and Visual-Linguistic Complexity in Verb Acquisition
Apr 05, 2023
Yuchen Zhou, Michael J. Tarr, Daniel Yurovsky

TIDEE: Tidying Up Novel Rooms using Visuo-Semantic Commonsense Priors
Jul 21, 2022
Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, Katerina Fragkiadaki

Learning Neural Acoustic Fields
Apr 04, 2022
Andrew Luo, Yilun Du, Michael J. Tarr, Joshua B. Tenenbaum, Antonio Torralba, Chuang Gan

Alpha Net: Adaptation with Composition in Classifier Space
Aug 17, 2020
Nadine Chang, Jayanth Koushik, Michael J. Tarr, Martial Hebert, Yu-Xiong Wang

Learning Intermediate Features of Object Affordances with a Convolutional Neural Network
Feb 20, 2020
Aria Yuan Wang, Michael J. Tarr

BOLD5000: A public fMRI dataset of 5000 images
Sep 05, 2018
Nadine Chang, John A. Pyles, Abhinav Gupta, Michael J. Tarr, Elissa M. Aminoff