WinoBias


ASCenD-BDS: Adaptable, Stochastic and Context-aware framework for Detection of Bias, Discrimination and Stereotyping

Feb 04, 2025

Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios

Sep 22, 2024

Are Models Biased on Text without Gender-related Language?

May 01, 2024

Gender bias and stereotypes in Large Language Models

Aug 28, 2023

Second Order WinoBias Test Set for Latent Gender Bias Detection in Coreference Resolution

Sep 28, 2021

NeuTral Rewriter: A Rule-Based and Neural Approach to Automatic Rewriting into Gender-Neutral Alternatives

Sep 13, 2021

Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models

Feb 16, 2021

Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds

Apr 26, 2020

WikiCREM: A Large Unsupervised Corpus for Coreference Resolution

Aug 23, 2019

Gender Bias in Contextualized Word Embeddings

Apr 05, 2019