Decoupling Inherent Risk and Early Cancer Signs in Image-based Breast Cancer Risk Models

Jul 14, 2020
Yue Liu, Hossein Azizpour, Fredrik Strand, Kevin Smith

* Medical Image Computing and Computer Assisted Interventions 2020 

Explanation-based Weakly-supervised Learning of Visual Relations with Graph Networks

Jun 16, 2020
Federico Baldassarre, Kevin Smith, Josephine Sullivan, Hossein Azizpour

Recurrent neural networks and Koopman-based frameworks for temporal predictions in turbulence

May 01, 2020
Hamidreza Eivazi, Luca Guastoni, Philipp Schlatter, Hossein Azizpour, Ricardo Vinuesa

* arXiv admin note: significant text overlap with arXiv:2002.01222 

Hyperplane Arrangements of Trained ConvNets Are Biased

Mar 17, 2020
Matteo Gamba, Stefan Carlsson, Hossein Azizpour, Mårten Björkman

On the use of recurrent neural networks for predictions of turbulent flows

Feb 04, 2020
Luca Guastoni, Prem A. Srinivasan, Hossein Azizpour, Philipp Schlatter, Ricardo Vinuesa

Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

Jun 12, 2019
Erik Englesson, Hossein Azizpour

* Submitted at the ICML 2019 Workshop on Uncertainty & Robustness in Deep Learning (poster & spotlight talk) 

Explainability Techniques for Graph Convolutional Networks

May 31, 2019
Federico Baldassarre, Hossein Azizpour

* Accepted at the ICML 2019 Workshop "Learning and Reasoning with Graph-Structured Representations" (poster + spotlight talk) 

The role of artificial intelligence in achieving the Sustainable Development Goals

Apr 30, 2019
Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Langhans, Max Tegmark, Francesco Fuso Nerini

GANtruth - an unpaired image-to-image translation method for driving scenarios

Nov 26, 2018
Sebastian Bujwid, Miquel Martí, Hossein Azizpour, Alessandro Pieropan

* 32nd Conference on Neural Information Processing Systems (NeurIPS), Machine Learning for Intelligent Transportation Systems Workshop, Montréal, Canada, 2018 

Bayesian Uncertainty Estimation for Batch Normalized Deep Networks

Jul 16, 2018
Mattias Teye, Hossein Azizpour, Kevin Smith

* ICML 2018 

Factors of Transferability for a Generic ConvNet Representation

Jul 15, 2015
Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, Stefan Carlsson

* Extended version of the workshop paper with more experiments and updated text and title. Original CVPR15 DeepVision workshop paper title: "From Generic to Specific Deep Representations for Visual Recognition" 

Spotlight the Negatives: A Generalized Discriminative Latent Model

Jul 08, 2015
Hossein Azizpour, Mostafa Arefiyan, Sobhan Naderi Parizi, Stefan Carlsson

* Published in proceedings of BMVC 2015 

Persistent Evidence of Local Image Properties in Generic ConvNets

Nov 24, 2014
Ali Sharif Razavian, Hossein Azizpour, Atsuto Maki, Josephine Sullivan, Carl Henrik Ek, Stefan Carlsson

Self-tuned Visual Subclass Learning with Shared Samples: An Incremental Approach

May 26, 2014
Hossein Azizpour, Stefan Carlsson

* Updated ICCV 2013 submission 

CNN Features off-the-shelf: an Astounding Baseline for Recognition

May 12, 2014
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, Stefan Carlsson

* Version 3 revisions: 1) Added results using feature processing and data augmentation; 2) Referring to the most recent efforts using CNNs for different visual recognition tasks; 3) Updated text/captions 
