Vinay Kumar Sankarapu

Interpretability as Alignment: Making Internal Understanding a Design Principle

Sep 10, 2025

Bridging the Gap in XAI: Why Reliable Metrics Matter for Explainability and Compliance

Feb 07, 2025

xai_evals: A Framework for Evaluating Post-Hoc Local Explanation Methods

Feb 05, 2025

DLBacktrace: A Model Agnostic Explainability for any Deep Learning Models

Nov 19, 2024