
Julius Adebayo

Debugging Tests for Model Explanations

Nov 10, 2020
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

* A shorter version of this work will appear at NeurIPS 2020 


Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

Aug 06, 2020
Nishanth Arun, Nathan Gaw, Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer

* Submitted to Nature Machine Intelligence. First four authors contributed equally to this work 


Explaining Explanations to Society

Jan 19, 2019
Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo

* NeurIPS 2018 Workshop on Ethical, Social and Governance Issues in AI 


Sanity Checks for Saliency Maps

Oct 28, 2018
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim

* NIPS 2018 camera-ready version 


Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

Oct 08, 2018
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

* Workshop Track, International Conference on Learning Representations (ICLR) 


Investigating Human + Machine Complementarity for Recidivism Predictions

Aug 28, 2018
Sarah Tan, Julius Adebayo, Kori Inkpen, Ece Kamar


The (Un)reliability of saliency methods

Nov 02, 2017
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim


Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models

Nov 15, 2016
Julius Adebayo, Lalana Kagal
