Ulrich Aïvodji

ETS

Towards Fair In-Context Learning with Tabular Foundation Models

May 15, 2025

Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy

May 09, 2025

Adaptive Group Robust Ensemble Knowledge Distillation

Nov 22, 2024

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning

Dec 22, 2023

Probabilistic Dataset Reconstruction from Interpretable Models

Aug 29, 2023

Fairness Under Demographic Scarce Regime

Jul 24, 2023

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods

Mar 08, 2023

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

Sep 02, 2022

Fooling SHAP with Stealthily Biased Sampling

May 30, 2022

Characterizing the risk of fairwashing

Jun 14, 2021