Kacper Sokol

What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks

Mar 19, 2024
Kacper Sokol, Julia E. Vogt

Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse

Sep 08, 2023
Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raul Santos-Rodriguez

Navigating Explanatory Multiverse Through Counterfactual Path Geometry

Jun 05, 2023
Kacper Sokol, Edward Small, Yueqing Xuan

(Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Explainability

Jun 04, 2023
Kacper Sokol, Julia E. Vogt

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness

Apr 19, 2023
Edward A. Small, Kacper Sokol, Daniel Manning, Flora D. Salim, Jeffrey Chan

More Is Less: When Do Recommenders Underperform for Data-rich Users?

Apr 15, 2023
Yueqing Xuan, Kacper Sokol, Jeffrey Chan, Mark Sanderson

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations

Mar 02, 2023
Edward Small, Yueqing Xuan, Danula Hettiachchi, Kacper Sokol

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication

Feb 07, 2023
Bernard Keenan, Kacper Sokol

What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components

Sep 08, 2022
Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems

Sep 08, 2022
Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, Peter Flach