Himabindu Lakkaraju

Word-Level Explanations for Analyzing Bias in Text-to-Image Models

Jun 03, 2023
Alexander Lin, Lucas Monteiro Paes, Sree Harsha Tanneru, Suraj Srinivas, Himabindu Lakkaraju

Post Hoc Explanations of Language Models Can Improve Language Models

May 19, 2023
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju

Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten

Feb 10, 2023
Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju

On the Privacy Risks of Algorithmic Recourse

Nov 10, 2022
Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel

Towards Robust Off-Policy Evaluation via Human Inputs

Sep 18, 2022
Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, Himabindu Lakkaraju

Evaluating Explainability for Graph Neural Networks

Aug 19, 2022
Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik

TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues

Jul 08, 2022
Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh

OpenXAI: Towards a Transparent Evaluation of Model Explanations

Jun 22, 2022
Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

Flatten the Curve: Efficiently Training Low-Curvature Neural Networks

Jun 14, 2022
Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, Francois Fleuret
