Prescriptive and Descriptive Approaches to Machine-Learning Transparency

Apr 27, 2022
David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina

* ACM CHI Conference on Human Factors in Computing Systems 2022 

A Tour of Visualization Techniques for Computer Vision Datasets

Apr 19, 2022
Bilal Alsallakh, Pamela Bhattacharya, Vanessa Feng, Narine Kokhlikyan, Orion Reblitz-Richardson, Rahul Rajan, David Yan

Investigating sanity checks for saliency maps with image and text classification

Jun 08, 2021
Narine Kokhlikyan, Vivek Miglani, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson

Fine-grained Interpretation and Causation Analysis in Deep NLP Models

May 29, 2021
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani

* Accepted as a NAACL Tutorial

Investigating Saturation Effects in Integrated Gradients

Oct 23, 2020
Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson

* Presented at ICML Workshop on Human Interpretability in Machine Learning (WHI 2020) 

Mind the Pad -- CNNs can Develop Blind Spots

Oct 05, 2020
Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Jun Yuan, Orion Reblitz-Richardson

* Appendix E available at 

Captum: A unified and generic model interpretability library for PyTorch

Sep 16, 2020
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
