Kathleen C. Fraser

Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes

Apr 18, 2024
Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, Svetlana Kiritchenko

Uncovering Bias in Large Vision-Language Models with Counterfactuals

Mar 29, 2024
Phillip Howard, Anahita Bhiwandiwalla, Kathleen C. Fraser, Svetlana Kiritchenko

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images

Feb 08, 2024
Kathleen C. Fraser, Svetlana Kiritchenko

Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

Jul 04, 2023
Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, Esma Balkır

The crime of being poor

Mar 24, 2023
Georgina Curto, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

Feb 14, 2023
Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

Oct 19, 2022
Isar Nejadgholi, Esma Balkır, Kathleen C. Fraser, Svetlana Kiritchenko

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

Jun 08, 2022
Esma Balkir, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser

Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy

May 25, 2022
Kathleen C. Fraser, Svetlana Kiritchenko, Esma Balkir

Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection

May 06, 2022
Esma Balkir, Isar Nejadgholi, Kathleen C. Fraser, Svetlana Kiritchenko
