XAIR: A Framework of Explainable AI in Augmented Reality

Mar 28, 2023
Xuhai Xu, Mengjie Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, Paul Sorenson, Sophie Kahyun Kim, Hrvoje Benko

* Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems 

Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

Dec 26, 2022
Narine Kokhlikyan, Bilal Alsallakh, Fulton Wang, Vivek Miglani, Oliver Aobo Yang, David Adkins

Prescriptive and Descriptive Approaches to Machine-Learning Transparency

Apr 27, 2022
David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina

* ACM CHI Conference on Human Factors in Computing Systems 2022 

A Tour of Visualization Techniques for Computer Vision Datasets

Apr 19, 2022
Bilal Alsallakh, Pamela Bhattacharya, Vanessa Feng, Narine Kokhlikyan, Orion Reblitz-Richardson, Rahul Rajan, David Yan

Investigating sanity checks for saliency maps with image and text classification

Jun 08, 2021
Narine Kokhlikyan, Vivek Miglani, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson

Fine-grained Interpretation and Causation Analysis in Deep NLP Models

May 29, 2021
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani

* Accepted at NAACL Tutorial 

Investigating Saturation Effects in Integrated Gradients

Oct 23, 2020
Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson

* Presented at ICML Workshop on Human Interpretability in Machine Learning (WHI 2020) 

Mind the Pad -- CNNs can Develop Blind Spots

Oct 05, 2020
Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Jun Yuan, Orion Reblitz-Richardson

* Appendix E available at https://drive.google.com/file/d/1bIvRQJIBwJbKTfpg0hNaFX2ThuuDO8PU/view?usp=sharing 

Captum: A unified and generic model interpretability library for PyTorch

Sep 16, 2020
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
