Michael J. Tarr

ICAL: Continual Learning of Multimodal Agents by Transforming Trajectories into Actionable Insights

Jun 20, 2024

StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images

Jun 19, 2024

Neural Representations of Dynamic Visual Stimuli

Jun 04, 2024

HELPER-X: A Unified Instructable Embodied Agent to Tackle Four Interactive Vision-Language Domains with Memory-Augmented Language Models

Apr 29, 2024

Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models

Oct 23, 2023

BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity

Oct 06, 2023

Thinking Like an Annotator: Generation of Dataset Labeling Instructions

Jun 24, 2023

Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models

Jun 05, 2023

Quantifying the Roles of Visual, Linguistic, and Visual-Linguistic Complexity in Verb Acquisition

Apr 05, 2023

TIDEE: Tidying Up Novel Rooms using Visuo-Semantic Commonsense Priors

Jul 21, 2022