Mengmi Zhang

Label-Efficient Online Continual Object Detection in Streaming Video

Jun 01, 2022

Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases

Jun 05, 2021

Hypothesis-driven Stream Learning with Augmented Memory

Apr 07, 2021

When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes

Apr 06, 2021

Look Twice: A Computational Model of Return Fixations across Tasks and Species

Jan 05, 2021

What am I Searching for: Zero-shot Target Identity Inference in Visual Search

May 28, 2020

Putting visual object recognition in context

Dec 09, 2019

Prototype Reminding for Continual Learning

May 23, 2019

Lift-the-Flap: Context Reasoning Using Object-Centered Graphs

Feb 01, 2019

Egocentric Spatial Memory

Jul 31, 2018