
Daniele Magazzeni

University of Chieti, Italy

Towards Accelerating Benders Decomposition via Reinforcement Learning Surrogate Models

Jul 17, 2023

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

Jul 13, 2023

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features

Jul 10, 2023

GLOBE-CE: A Translation-Based Approach for Global Counterfactual Explanations

May 26, 2023

Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees

May 19, 2023

Bayesian Hierarchical Models for Counterfactual Estimation

Jan 21, 2023

Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions

Nov 21, 2022

Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning

Nov 11, 2022

Towards learning to explain with concept bottleneck models: mitigating information leakage

Nov 07, 2022

Feature Importance for Time Series Data: Improving KernelSHAP

Oct 05, 2022