Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making

Oct 15, 2020
Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett

* 15 pages, 5 figures 

Invariant Risk Minimization Games

Mar 18, 2020
Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar


Model Agnostic Multilevel Explanations

Mar 12, 2020
Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar

* 21 pages, 9 figures, 1 table 

Learning Global Transparent Models from Local Contrastive Explanations

Feb 19, 2020
Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar


One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang


Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning

Jun 05, 2019
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

* presented at 2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA. arXiv admin note: substantial text overlap with arXiv:1805.11648 

Model Agnostic Contrastive Explanations for Structured Data

May 31, 2019
Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchir Puri


Leveraging Simple Model Predictions for Enhancing its Performance

May 30, 2019
Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss


Generating Contrastive Explanations with Monotonic Attribute Functions

May 29, 2019
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Karthikeyan Shanmugam, Chun-Chen Tu


TED: Teaching AI to Explain its Decisions

Nov 12, 2018
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

* This article leverages some content from arXiv:1805.11648 

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives

Oct 29, 2018
Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, Payel Das


TIP: Typifying the Interpretability of Procedures

Oct 29, 2018
Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam


Teaching Meaningful Explanations

Sep 11, 2018
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

* 9 pages 

Streaming Methods for Restricted Strongly Convex Functions with Applications to Prototype Selection

Jul 21, 2018
Karthik S. Gurumoorthy, Amit Dhurandhar


Improving Simple Models with Confidence Profiles

Jul 19, 2018
Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Olsen

* 16 pages 

ProtoDash: Fast Interpretable Prototype Selection

Feb 03, 2018
Karthik S. Gurumoorthy, Amit Dhurandhar, Guillermo Cecchi


A Formal Framework to Characterize Interpretability of Procedures

Jul 12, 2017
Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam

* presented at 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017), Sydney, NSW, Australia 

Learning with Changing Features

Apr 29, 2017
Amit Dhurandhar, Steve Hanneke, Liu Yang


Uncovering Group Level Insights with Accordant Clustering

Apr 07, 2017
Amit Dhurandhar, Margareta Ackerman, Xiang Wang

* accepted to SDM 2017 (oral) 

Building an Interpretable Recommender via Loss-Preserving Transformation

Jun 19, 2016
Amit Dhurandhar, Sechan Oh, Marek Petrik

* Presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY 
