CrowS-Pairs
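The papers below build on or evaluate against the CrowS-Pairs benchmark, which compares a model's preference between minimal sentence pairs (one stereotypical, one anti-stereotypical) and reports the fraction of pairs where the stereotypical sentence is preferred; roughly 50% indicates no measured preference. A minimal sketch of that aggregate metric, assuming pseudo-log-likelihood (PLL) scores for each sentence have already been computed elsewhere (e.g. by summing masked-token log-probabilities under a masked language model); the scores below are hypothetical illustrative numbers:

```python
def crows_pairs_bias_score(pairs):
    """Fraction of minimal pairs where the model assigns a higher
    pseudo-log-likelihood to the stereotypical sentence.

    `pairs` is a list of (pll_stereo, pll_antistereo) tuples.
    A score near 0.5 suggests no measured preference; higher values
    indicate a preference for the stereotypical sentences.
    """
    if not pairs:
        raise ValueError("need at least one sentence pair")
    preferred = sum(1 for stereo, anti in pairs if stereo > anti)
    return preferred / len(pairs)

# Toy PLL scores (hypothetical, not from any real model):
example_pairs = [(-12.3, -14.1), (-9.8, -9.5), (-20.0, -21.7), (-7.4, -7.9)]
print(crows_pairs_bias_score(example_pairs))  # 3 of 4 pairs favor stereo -> 0.75
```

The comparison direction and the 50%-as-neutral reading follow the original benchmark's framing; the PLL computation itself is deliberately left out, since each paper below may score sentences differently.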


Multiple-Debias: A Full-process Debiasing Method for Multilingual Pre-trained Language Models

Apr 03, 2026

Routing Sensitivity Without Controllability: A Diagnostic Study of Fairness in MoE Language Models

Mar 28, 2026

Dutch CrowS-Pairs: Adapting a Challenge Dataset for Measuring Social Biases in Language Models for Dutch

Jul 22, 2025

BiasEdit: Debiasing Stereotyped Language Models via Model Editing

Mar 11, 2025

ASCenD-BDS: Adaptable, Stochastic and Context-aware framework for Detection of Bias, Discrimination and Stereotyping

Feb 04, 2025

Filipino Benchmarks for Measuring Sexist and Homophobic Bias in Multilingual Language Models from Southeast Asia

Dec 10, 2024

Attention Speaks Volumes: Localizing and Mitigating Bias in Language Models

Oct 29, 2024

STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions

Sep 20, 2024

IndiBias: A Benchmark Dataset to Measure Social Biases in Language Models for Indian Context

Apr 03, 2024

Robust Evaluation Measures for Evaluating Social Biases in Masked Language Models

Jan 21, 2024