David Adkins

Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

Dec 26, 2022
Narine Kokhlikyan, Bilal Alsallakh, Fulton Wang, Vivek Miglani, Oliver Aobo Yang, David Adkins

We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes. Prior research has primarily focused on mitigating one kind of bias by incorporating complex fairness-driven constraints into optimization objectives or by designing additional layers that focus on specific protected attributes. We introduce a simple and generic bias mitigation approach that prevents models from learning relationships between protected attributes and the output variable by reducing the mutual information between them. We demonstrate that our approach is effective in reducing bias with little or no drop in accuracy. We also show that the models trained with our learning framework become causally fair and insensitive to the values of protected attributes. Finally, we validate our approach by studying feature interactions between protected and non-protected attributes. We demonstrate that these interactions are significantly reduced when applying our bias mitigation.
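
The abstract does not spell out how the mutual-information term is estimated or optimized. As a minimal illustrative sketch in PyTorch, one could add a differentiable decorrelation proxy between subgroup membership and model predictions to the task loss; zero cross-covariance is a necessary (though weaker) condition for zero mutual information. The function and parameter names below (`cross_covariance_penalty`, `lam`) are placeholders, not the paper's API.

```python
import torch
import torch.nn.functional as F


def cross_covariance_penalty(subgroup_onehot, predictions):
    """Squared cross-covariance between subgroup membership and predictions.

    `subgroup_onehot` is a (batch, num_subgroups) one-hot encoding of
    intersectional subgroup membership (e.g. the cross product of protected
    attributes); `predictions` is a (batch, num_classes) tensor of predicted
    probabilities. This is a simple stand-in for a mutual-information penalty.
    """
    a = subgroup_onehot - subgroup_onehot.mean(dim=0, keepdim=True)
    p = predictions - predictions.mean(dim=0, keepdim=True)
    cov = a.t() @ p / a.shape[0]  # (num_subgroups, num_classes)
    return (cov ** 2).sum()


def training_step(model, optimizer, x, y, subgroup_onehot, lam=1.0):
    """One step of task loss plus fairness penalty; `lam` trades accuracy for fairness."""
    optimizer.zero_grad()
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    fairness_loss = cross_covariance_penalty(subgroup_onehot, logits.softmax(dim=-1))
    loss = task_loss + lam * fairness_loss
    loss.backward()
    optimizer.step()
    return task_loss.item(), fairness_loss.item()
```

Because the penalty only touches the loss, it can be bolted onto an existing training loop without architectural changes, which matches the "simple and generic" framing in the abstract.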

Prescriptive and Descriptive Approaches to Machine-Learning Transparency

Apr 27, 2022
David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina

Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system's performance. We survey approaches that aim to provide such guidance in a prescriptive way. We further propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of ML systems by providing prescriptive documentation of commonly-used ML methods and techniques. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations for model developers. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.

* ACM CHI Conference on Human Factors in Computing Systems 2022 
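
The abstract only sketches what a Method Card contains. Purely as an illustration (the field names and example content below are assumptions, not the schema defined in the paper), a minimal Python rendering of a prescriptive documentation record might look like this:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MethodCard:
    """Hypothetical, minimal record for prescriptive ML-method documentation."""
    method_name: str
    intended_use: str
    prerequisites: List[str] = field(default_factory=list)       # data/compute assumptions
    recommended_steps: List[str] = field(default_factory=list)   # prescriptive guidance
    known_pitfalls: List[str] = field(default_factory=list)      # common failure modes


# Illustrative entry loosely inspired by the small-object-detection example in the abstract.
card = MethodCard(
    method_name="Small object detection",
    intended_use="Detecting objects that occupy a small fraction of the image",
    prerequisites=["Bounding-box annotations for small objects", "High-resolution inputs"],
    recommended_steps=["Increase input resolution", "Match anchor sizes to the object scale"],
    known_pitfalls=["Aggressive downsampling discards small-object signal"],
)
```

The point of such a record, as the abstract argues, is that it tells an ML engineer what to do next rather than only describing what the system is, which is the gap the prescriptive approach aims to fill.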