DeAR: Debiasing Vision-Language Models with Additive Residuals


Mar 18, 2023
Ashish Seth, Mayur Hemani, Chirag Agarwal

* Accepted to CVPR'23. Code and dataset will be released soon.

GNNDelete: A General Strategy for Unlearning in Graph Neural Networks


Feb 26, 2023
Jiali Cheng, George Dasoulas, Huan He, Chirag Agarwal, Marinka Zitnik

* Accepted to ICLR 2023

Towards Estimating Transferability using Hard Subsets


Jan 17, 2023
Tarun Ram Menta, Surgan Jandial, Akash Patil, Vimal KB, Saketh Bachu, Balaji Krishnamurthy, Vineeth N. Balasubramanian, Chirag Agarwal, Mausoom Sarkar

* First three authors contributed equally 

Towards Training GNNs using Explanation Directed Message Passing


Dec 01, 2022
Valentina Giunchiglia, Chirag Varun Shukla, Guadalupe Gonzalez, Chirag Agarwal

* Accepted to the proceedings of the First Learning on Graphs Conference (LoG 2022) 

Evaluating Explainability for Graph Neural Networks


Aug 19, 2022
Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik


OpenXAI: Towards a Transparent Evaluation of Model Explanations


Jun 22, 2022
Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

* Preprint 

Rethinking Stability for Attribution-based Explanations


Mar 14, 2022
Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju


A Tale Of Two Long Tails


Jul 27, 2021
Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker

* Preliminary results accepted to Workshop on Uncertainty and Robustness in Deep Learning (UDL), ICML, 2021 

On the Connections between Counterfactual Explanations and Adversarial Examples


Jun 18, 2021
Martin Pawelczyk, Shalmali Joshi, Chirag Agarwal, Sohini Upadhyay, Himabindu Lakkaraju


Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations


Jun 16, 2021
Chirag Agarwal, Marinka Zitnik, Himabindu Lakkaraju

Add code


   Access Paper or Ask Questions

  • Share via Twitter
  • Share via Facebook
  • Share via LinkedIn
  • Share via Whatsapp
  • Share via Messenger
  • Share via Email