Data Science Laboratory, School of Architecture, Engineering and Design, Universidad Europea de Madrid, Spain
Abstract: As artificial intelligence (AI) systems become increasingly integrated into critical decision-making processes, the need for transparent and interpretable models has become paramount. In this article we present SRules, a new rule-set creation method based on surrogate decision trees, designed to improve the interpretability of black-box machine learning models. SRules balances the accuracy, coverage, and interpretability of machine learning models by recursively creating interpretable surrogate decision trees that approximate the decision boundaries of a complex model. We propose a systematic framework for generating concise and meaningful rules from these surrogate models, allowing stakeholders to understand and trust the AI system's decision-making process. Our approach not only provides interpretable rules, but also quantifies the confidence and coverage of those rules. The model's parameters can be adjusted to trade interpretability against precision and coverage, allowing a near-perfect fit and high interpretability for selected parts of the model. The results show that SRules improves on other state-of-the-art techniques and makes it possible to create highly interpretable rules for specific sub-parts of the model.
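To make the surrogate-rule idea concrete, the sketch below fits an interpretable decision tree to a black-box model's predictions and reads a rule, with its confidence and coverage, off each leaf. This is a minimal illustration of the general technique the abstract describes, not the authors' SRules implementation: the recursive refinement of surrogates over poorly covered regions is omitted, and the helper `leaf_rules`, the synthetic dataset, and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data and an opaque "black-box" model.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow, interpretable tree trained on the black box's labels.
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
print(f"surrogate fidelity: {surrogate.score(X, y_bb):.3f}")

def leaf_rules(clf, feature_names):
    """Yield (rule, predicted class, confidence, coverage) for each leaf."""
    t = clf.tree_
    def recurse(node, conds):
        if t.children_left[node] == -1:               # leaf node
            dist = t.value[node][0]                   # class distribution at leaf
            pred = clf.classes_[dist.argmax()]
            confidence = dist.max() / dist.sum()      # leaf purity w.r.t. black-box labels
            coverage = t.n_node_samples[node] / t.n_node_samples[0]
            yield (" AND ".join(conds) if conds else "TRUE",
                   pred, confidence, coverage)
        else:
            name, thr = feature_names[t.feature[node]], t.threshold[node]
            yield from recurse(t.children_left[node], conds + [f"{name} <= {thr:.2f}"])
            yield from recurse(t.children_right[node], conds + [f"{name} > {thr:.2f}"])
    yield from recurse(0, [])

for rule, pred, confidence, coverage in leaf_rules(surrogate, [f"x{i}" for i in range(8)]):
    print(f"IF {rule} THEN class={pred} (confidence={confidence:.2f}, coverage={coverage:.2f})")
```

Each printed rule is the conjunction of threshold conditions along a root-to-leaf path; confidence is the leaf's purity with respect to the black-box labels, and coverage is the fraction of training samples reaching that leaf.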
Abstract: The notion of robustness in XAI refers to the observed variation in the explanation of a learned model's prediction with respect to changes in the input leading to that prediction. Intuitively, if the input being explained is modified subtly enough that the model's prediction does not change much, then we would expect the explanation provided for that new input not to change much either. We argue that combining, through discriminative averaging, the explanations of an ensemble's weak learners can improve the robustness of explanations in ensemble methods. This approach has been implemented and tested with the post-hoc SHAP method and Random Forest ensembles, with successful results. The improvements obtained have been measured quantitatively, and some insights into the robustness of explainability in ensemble methods are presented.
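The following sketch illustrates the aggregation idea on a Random Forest using the `shap` package: SHAP values are computed per weak learner and then averaged across trees. A plain uniform mean stands in for the paper's discriminative averaging, whose exact weighting scheme the abstract does not specify; the model, data, and variable names are all assumptions for illustration.

```python
import numpy as np
import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
x = X[:1]  # single instance to explain

# Usual post-hoc route: one explanation computed from the whole forest.
forest_shap = shap.TreeExplainer(forest).shap_values(x)

# Per-weak-learner explanations, then aggregated across trees. A uniform
# mean is used here; a discriminative scheme would weight trees unequally.
per_tree = np.array([shap.TreeExplainer(t).shap_values(x)
                     for t in forest.estimators_])
aggregated = per_tree.mean(axis=0)
```

Because a Random Forest's output is the average of its trees' outputs, per-tree SHAP values average (up to the choice of background data) to the forest-level explanation, which is what makes this kind of aggregation well-posed in the first place.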
Abstract: In recent years, scientific workflows have become mature enough to be used in production. Despite this increasing maturity, however, there is still a shortage of tools for searching, adapting, and reusing workflows, which hinders their broader adoption by the scientific communities. Indeed, given the limited availability of machine-readable scientific metadata and the heterogeneity of workflow specification formats and representations, new ways to leverage alternative sources of information that complement existing approaches are needed. In this paper we address these limitations by applying statistically enriched generalized trie structures to exploit workflow execution provenance information in order to assist the analysis, indexing, and search of scientific workflows. Our method bridges the gap between what a workflow is supposed to do according to its specification and related metadata and what it actually does as recorded in its provenance execution trace. In doing so, we also show that the proposed method outperforms SPARQL 1.1 Property Paths for querying provenance graphs.
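As a rough illustration of the data structure involved (not the paper's implementation), the sketch below indexes workflow execution traces in a trie whose nodes carry occurrence counts, so that prefix queries return statistically enriched support values. The class and method names, and the example traces, are invented for the example.

```python
from collections import defaultdict

class TrieNode:
    def __init__(self):
        self.children = defaultdict(TrieNode)
        self.count = 0                      # traces observed with this prefix

class ProvenanceTrie:
    """Statistically enriched trie over workflow execution traces."""
    def __init__(self):
        self.root = TrieNode()
        self.total = 0

    def insert(self, trace):
        """Index one execution trace (a sequence of step/activity names)."""
        self.total += 1
        node = self.root
        for step in trace:
            node = node.children[step]
            node.count += 1

    def prefix_support(self, prefix):
        """Fraction of indexed traces that start with `prefix`."""
        node = self.root
        for step in prefix:
            if step not in node.children:
                return 0.0
            node = node.children[step]
        return node.count / self.total

trie = ProvenanceTrie()
trie.insert(["fetch_data", "clean", "align", "plot"])
trie.insert(["fetch_data", "clean", "cluster"])
print(trie.prefix_support(["fetch_data", "clean"]))  # -> 1.0
```

Because each prefix lookup walks at most `len(prefix)` nodes, such queries avoid the graph traversal cost that path expressions over a provenance graph would incur.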