Ilya Feige

Task-specific experimental design for treatment effect estimation

Jun 08, 2023
Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, Christopher Frye

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

Oct 23, 2020
Alex Mansbridge, Gregory Barbour, Davide Piras, Christopher Frye, Ilya Feige, David Barber

Explainability for fair machine learning

Oct 14, 2020
Tom Begley, Tobias Schwedes, Christopher Frye, Ilya Feige

Human-interpretable model explainability on high-dimensional data

Oct 14, 2020
Damien de Mijolla, Christopher Frye, Markus Kunesch, John Mansir, Ilya Feige

Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders

Oct 07, 2020
Benoit Gaujac, Ilya Feige, David Barber

Learning disentangled representations with the Wasserstein Autoencoder

Oct 07, 2020
Benoit Gaujac, Ilya Feige, David Barber

Shapley-based explainability on the data manifold

Jun 01, 2020
Christopher Frye, Damien de Mijolla, Laurence Cowton, Megan Stanley, Ilya Feige

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

Oct 14, 2019
Christopher Frye, Ilya Feige, Colin Rowat

Parenting: Safe Reinforcement Learning from Human Input

Feb 18, 2019
Christopher Frye, Ilya Feige

Invariant-equivariant representation learning for multi-class data

Feb 08, 2019
Ilya Feige
