Himabindu Lakkaraju

Feature Attributions and Counterfactual Explanations Can Be Manipulated


Jun 25, 2021
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju

* arXiv admin note: text overlap with arXiv:2106.02666 

What will it take to generate fairness-preserving explanations?


Jun 24, 2021
Jessica Dai, Sohini Upadhyay, Stephen H. Bach, Himabindu Lakkaraju

* Presented at ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI 

On the Connections between Counterfactual Explanations and Adversarial Examples


Jun 18, 2021
Martin Pawelczyk, Shalmali Joshi, Chirag Agarwal, Sohini Upadhyay, Himabindu Lakkaraju


Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations


Jun 16, 2021
Chirag Agarwal, Marinka Zitnik, Himabindu Lakkaraju


Counterfactual Explanations Can Be Manipulated


Jun 04, 2021
Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh


Learning Under Adversarial and Interventional Shifts


Mar 29, 2021
Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, Himabindu Lakkaraju

* 19 pages including 5 pages appendix, 6 figures, 2 tables. Preliminary version presented at Causal Discovery & Causality-Inspired Machine Learning Workshop 2020 

Towards a Unified Framework for Fair and Stable Graph Representation Learning


Mar 01, 2021
Chirag Agarwal, Himabindu Lakkaraju, Marinka Zitnik


Towards Robust and Reliable Algorithmic Recourse


Feb 26, 2021
Sohini Upadhyay, Shalmali Joshi, Himabindu Lakkaraju


Towards the Unification and Robustness of Perturbation and Gradient Based Explanations


Feb 21, 2021
Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Zhiwei Steven Wu, Himabindu Lakkaraju


Can I Still Trust You?: Understanding the Impact of Distribution Shifts on Algorithmic Recourses


Dec 22, 2020
Kaivalya Rawal, Ece Kamar, Himabindu Lakkaraju


Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring


Dec 01, 2020
Tom Sühr, Sophie Hilgard, Himabindu Lakkaraju


When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making


Nov 13, 2020
Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage, Himabindu Lakkaraju


Robust and Stable Black Box Explanations


Nov 12, 2020
Himabindu Lakkaraju, Nino Arsov, Osbert Bastani


Ensuring Actionable Recourse via Adversarial Training


Nov 12, 2020
Alexis Ross, Himabindu Lakkaraju, Osbert Bastani


Incorporating Interpretable Output Constraints in Bayesian Neural Networks


Oct 21, 2020
Wanqian Yang, Lars Lorch, Moritz A. Graule, Himabindu Lakkaraju, Finale Doshi-Velez

* 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Code available at: https://github.com/dtak/ocbnn-public 

Interpretable and Interactive Summaries of Actionable Recourses


Sep 16, 2020
Kaivalya Rawal, Himabindu Lakkaraju


How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations


Aug 11, 2020
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju


Fair Influence Maximization: A Welfare Optimization Approach


Jun 14, 2020
Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Eric Rice, Milind Tambe


"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations


Nov 15, 2019
Himabindu Lakkaraju, Osbert Bastani


How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods


Nov 06, 2019
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju


Interpretable & Explorable Approximations of Black Box Models


Jul 04, 2017
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Jure Leskovec

* Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning 

Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration


Dec 10, 2016
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Eric Horvitz

* To appear in AAAI 2017; Presented at NIPS Workshop on Reliability in ML, 2016 

Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation


Nov 23, 2016
Himabindu Lakkaraju, Cynthia Rudin

* Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems 

Learning Cost-Effective Treatment Regimes using Markov Decision Processes


Oct 21, 2016
Himabindu Lakkaraju, Cynthia Rudin


Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media


May 07, 2012
Himabindu Lakkaraju, Indrajit Bhattacharya, Chiranjib Bhattacharyya

* 9 pages 
