
Aude Oliva

All at Once Network Quantization via Collaborative Knowledge Transfer


Mar 02, 2021
Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, Aude Oliva, Rogerio Feris, Kate Saenko


VA-RED²: Video Adaptive Redundancy Reduction


Feb 15, 2021
Bowen Pan, Rameswar Panda, Camilo Fosco, Chung-Ching Lin, Alex Andonian, Yue Meng, Kate Saenko, Aude Oliva, Rogerio Feris

* Accepted at ICLR 2021


AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition


Feb 10, 2021
Yue Meng, Rameswar Panda, Chung-Ching Lin, Prasanna Sattigeri, Leonid Karlinsky, Kate Saenko, Aude Oliva, Rogerio Feris

* Accepted at ICLR 2021


Deep Analysis of CNN-based Spatio-temporal Representations for Action Recognition


Oct 23, 2020
Chun-Fu Chen, Rameswar Panda, Kandan Ramakrishnan, Rogerio Feris, John Cohn, Aude Oliva, Quanfu Fan

* Code and models are available at https://github.com/IBM/action-recognition-pytorch


Multimodal Memorability: Modeling Effects of Semantics and Decay on Video Memorability


Sep 05, 2020
Anelise Newman, Camilo Fosco, Vincent Casser, Allen Lee, Barry McNamara, Aude Oliva

* European Conference on Computer Vision 


We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos


Aug 12, 2020
Alex Andonian, Camilo Fosco, Mathew Monfort, Allen Lee, Rogerio Feris, Carl Vondrick, Aude Oliva

* Accepted at the European Conference on Computer Vision (ECCV) 2020


AR-Net: Adaptive Frame Resolution for Efficient Action Recognition


Jul 31, 2020
Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, Rogerio Feris


Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding


Nov 04, 2019
Mathew Monfort, Kandan Ramakrishnan, Alex Andonian, Barry A McNamara, Alex Lascelles, Bowen Pan, Quanfu Fan, Dan Gutfreund, Rogerio Feris, Aude Oliva


Reasoning About Human-Object Interactions Through Dual Attention Networks


Sep 10, 2019
Tete Xiao, Quanfu Fan, Dan Gutfreund, Mathew Monfort, Aude Oliva, Bolei Zhou

* ICCV 2019 


GANalyze: Toward Visual Definitions of Cognitive Image Properties


Jun 24, 2019
Lore Goetschalckx, Alex Andonian, Aude Oliva, Phillip Isola

* 17 pages, 15 figures 


Cross-view Semantic Segmentation for Sensing Surroundings


Jun 09, 2019
Bowen Pan, Jiankai Sun, Alex Andonian, Aude Oliva, Bolei Zhou


The Algonauts Project: A Platform for Communication between the Sciences of Biological and Artificial Intelligence


May 14, 2019
Radoslaw Martin Cichy, Gemma Roig, Alex Andonian, Kshitij Dwivedi, Benjamin Lahner, Alex Lascelles, Yalda Mohsenzadeh, Kandan Ramakrishnan, Aude Oliva

* 4 pages, 2 figures 


Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics


Jul 27, 2018
Spandan Madan, Zoya Bylinskii, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Frédo Durand


Temporal Relational Reasoning in Videos


Jul 25, 2018
Bolei Zhou, Alex Andonian, Aude Oliva, Antonio Torralba

* Camera-ready version for ECCV 2018


Interpreting Deep Visual Representations via Network Dissection


Jun 26, 2018
Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba

* B. Zhou and D. Bau contributed equally to this work. 15 pages, 27 figures


Moments in Time Dataset: one million videos for event understanding


Jan 09, 2018
Mathew Monfort, Bolei Zhou, Sarah Adel Bargal, Alex Andonian, Tom Yan, Kandan Ramakrishnan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, Aude Oliva


Understanding Infographics through Textual and Visual Tag Prediction


Sep 26, 2017
Zoya Bylinskii, Sami Alsheikh, Spandan Madan, Adrià Recasens, Kimberli Zhong, Hanspeter Pfister, Frédo Durand, Aude Oliva


BubbleView: an interface for crowdsourcing image importance maps and tracking visual attention


Aug 09, 2017
Nam Wook Kim, Zoya Bylinskii, Michelle A. Borkin, Krzysztof Z. Gajos, Aude Oliva, Frédo Durand, Hanspeter Pfister

* TOCHI 2017 


Network Dissection: Quantifying Interpretability of Deep Visual Representations


Apr 19, 2017
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba

* First two authors contributed equally. Oral presentation at CVPR 2017 


What do different evaluation metrics tell us about saliency models?


Apr 06, 2017
Zoya Bylinskii, Tilke Judd, Aude Oliva, Antonio Torralba, Frédo Durand


Places: An Image Database for Deep Scene Understanding


Oct 06, 2016
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Antonio Torralba, Aude Oliva


Deep Neural Networks predict Hierarchical Spatio-temporal Cortical Dynamics of Human Visual Object Recognition


Jan 12, 2016
Radoslaw M. Cichy, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, Aude Oliva

* 15 pages, 6 figures 


Learning Deep Features for Discriminative Localization


Dec 14, 2015
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba


Learning visual biases from human imagination


Nov 16, 2015
Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba

* To appear at NIPS 2015 


Object Detectors Emerge in Deep Scene CNNs


Apr 15, 2015
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba

* 12 pages, ICLR 2015 conference paper 
