Martin Jullum

Finding Money Launderers Using Heterogeneous Graph Neural Networks
Jul 25, 2023
Fredrik Johannessen, Martin Jullum

A Comparative Study of Methods for Estimating Conditional Shapley Values and When to Use Them
May 16, 2023
Lars Henry Berge Olsen, Ingrid Kristine Glad, Martin Jullum, Kjersti Aas

Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
Nov 26, 2021
Lars Henry Berge Olsen, Ingrid Kristine Glad, Martin Jullum, Kjersti Aas

MCCE: Monte Carlo sampling of realistic counterfactual explanations
Nov 18, 2021
Annabelle Redelmeier, Martin Jullum, Kjersti Aas, Anders Løland

groupShapley: Efficient prediction explanation with Shapley values for feature groups
Jun 23, 2021
Martin Jullum, Annabelle Redelmeier, Kjersti Aas

Statistical embedding: Beyond principal components
Jun 03, 2021
Dag Tjøstheim, Martin Jullum, Anders Løland

Explaining predictive models using Shapley values and non-parametric vine copulas
Feb 12, 2021
Kjersti Aas, Thomas Nagler, Martin Jullum, Anders Løland

Explaining predictive models with mixed features using Shapley values and conditional inference trees
Jul 02, 2020
Annabelle Redelmeier, Martin Jullum, Kjersti Aas

Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Mar 25, 2019
Kjersti Aas, Martin Jullum, Anders Løland