Margaret Mitchell

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

A Human Rights-Based Approach to Responsible AI

Oct 06, 2022

Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements

Oct 06, 2022

Measuring Model Biases in the Absence of Ground Truth

Mar 05, 2021

Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure

Oct 23, 2020

Diversity and Inclusion Metrics in Subset Selection

Feb 09, 2020

Perturbation Sensitivity Analysis to Detect Unintended Model Biases

Oct 09, 2019

Detecting Bias with Generative Counterfactual Face Attribute Augmentation

Jun 18, 2019

50 Years of Test (Un)fairness: Lessons for Machine Learning

Dec 03, 2018

InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity

Jul 17, 2018