
Eshika Saxena


Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors

Feb 02, 2024
Samuel Stevens, Emily Wenger, Cathy Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin Lauter


OpenXAI: Towards a Transparent Evaluation of Model Explanations

Jun 22, 2022
Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju


Rethinking Stability for Attribution-based Explanations

Mar 14, 2022
Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju
