Gabriel Kreiman

What can human minimal videos tell us about dynamic recognition models?

Apr 19, 2021
Guy Ben-Yosef, Gabriel Kreiman, Shimon Ullman

* Published as a workshop paper at Bridging AI and Cognitive Science (ICLR 2020); an extended version was published in Cognition

Hypothesis-driven Stream Learning with Augmented Memory

Apr 07, 2021
Mengmi Zhang, Rohil Badkundri, Morgan B. Talbot, Gabriel Kreiman

When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes

Apr 06, 2021
Philipp Bomatter, Mengmi Zhang, Dimitar Karev, Spandan Madan, Claire Tseng, Gabriel Kreiman

Look Twice: A Computational Model of Return Fixations across Tasks and Species

Jan 05, 2021
Mengmi Zhang, Will Xiao, Olivia Rose, Katarina Bendtz, Margaret Livingstone, Carlos Ponce, Gabriel Kreiman

Adversarial images for the primate brain

Nov 11, 2020
Li Yuan, Will Xiao, Gabriel Kreiman, Francis E. H. Tay, Jiashi Feng, Margaret S. Livingstone

What am I Searching for: Zero-shot Target Identity Inference in Visual Search

May 28, 2020
Mengmi Zhang, Gabriel Kreiman

* This was a mistaken new submission of arXiv:1807.11926

Can Deep Learning Recognize Subtle Human Activities?

Mar 30, 2020
Vincent Jacquot, Zhuofan Ying, Gabriel Kreiman

* Poster at CVPR 2020; includes supplementary figures

Removable and/or Repeated Units Emerge in Overparametrized Deep Neural Networks

Dec 21, 2019
Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman

Putting visual object recognition in context

Dec 09, 2019
Mengmi Zhang, Claire Tseng, Gabriel Kreiman

* 8 pages, conference paper

Gradient-free activation maximization for identifying effective stimuli

May 01, 2019
Will Xiao, Gabriel Kreiman

* 16 pages, 8 figures, 3 tables 

Lift-the-Flap: Context Reasoning Using Object-Centered Graphs

Feb 01, 2019
Mengmi Zhang, Jiashi Feng, Karla Montejo, Joseph Kwon, Joo Hwee Lim, Gabriel Kreiman

What am I searching for?

Jul 31, 2018
Mengmi Zhang, Jiashi Feng, Joo Hwee Lim, Qi Zhao, Gabriel Kreiman

* 10 pages, 4 figures, 1 table; under review at NIPS 2018

Finding any Waldo: zero-shot invariant and efficient visual search

Jul 18, 2018
Mengmi Zhang, Jiashi Feng, Keng Teck Ma, Joo Hwee Lim, Qi Zhao, Gabriel Kreiman

* 6 figures, 14 supplementary figures

Learning Scene Gist with Convolutional Neural Networks to Improve Object Recognition

Jun 09, 2018
Kevin Wu, Eric Wu, Gabriel Kreiman

A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception

May 30, 2018
William Lotter, Gabriel Kreiman, David Cox

Recurrent computations for visual pattern completion

Apr 06, 2018
Hanlin Tang, Martin Schrimpf, Bill Lotter, Charlotte Moerman, Ana Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, Gabriel Kreiman

On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations

Mar 23, 2017
Nicholas Cheney, Martin Schrimpf, Gabriel Kreiman

* Under review at ICML 2017

Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning

Mar 01, 2017
William Lotter, Gabriel Kreiman, David Cox

* Code and example video clips are available online

Unsupervised Learning of Visual Structure using Predictive Generative Networks

Jan 20, 2016
William Lotter, Gabriel Kreiman, David Cox

* Under review as a conference paper at ICLR 2016
