
Eddie L. Ungless


Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models

May 26, 2023
Eddie L. Ungless, Björn Ross, Anne Lauscher


Cutting-edge image generation has been praised for producing high-quality images, suggesting a ubiquitous future in a variety of applications. However, initial studies have pointed to the potential for harm due to predictive bias, reflecting and potentially reinforcing cultural stereotypes. In this work, we are the first to investigate how multimodal models handle diverse gender identities. Concretely, we conduct a thorough analysis in which we compare the output of three image generation models for prompts containing cisgender vs. non-cisgender identity terms. Our findings demonstrate that certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised. We complement our experimental analysis with (a) a survey among non-cisgender individuals and (b) a series of interviews, to establish which harms affected individuals anticipate, and how they would like to be represented. We find respondents are particularly concerned about misrepresentation, and the potential to drive harmful behaviours and beliefs. Simple heuristics to limit offensive content are widely rejected, and instead respondents call for community involvement, curated training data and the ability to customise. These improvements could pave the way for a future where change is led by the affected community, and technology is used to positively "[portray] queerness in ways that we haven't even thought of" rather than reproducing stale, offensive stereotypes.
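
As a rough illustration of the prompt-based comparison described in the abstract, the sketch below generates images for a handful of identity terms using a single open text-to-image model (Stable Diffusion via the Hugging Face diffusers library). The model choice, prompt template and identity terms are illustrative assumptions, not the exact setup or the three models used in the paper.

# Minimal sketch (assumptions flagged above): generate images per identity
# term so outputs can be compared across cisgender and non-cisgender prompts.
from diffusers import StableDiffusionPipeline
import torch

# Load one open text-to-image model; the paper compared three models,
# which are not reproduced here.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Illustrative identity terms and prompt template.
identity_terms = ["a woman", "a man", "a transgender woman", "a non-binary person"]
template = "a photo of {}"

for term in identity_terms:
    # Generate several images per prompt for a more stable comparison.
    images = pipe(template.format(term), num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(f"{term.replace(' ', '_')}_{i}.png")

The saved images for each identity term could then be inspected or annotated to compare how the different groups are depicted.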

* Accepted to ACL Findings 2023 

A Robust Bias Mitigation Procedure Based on the Stereotype Content Model

Oct 26, 2022
Eddie L. Ungless, Amy Rafferty, Hrichika Nag, Björn Ross


The Stereotype Content Model (SCM) states that we tend to perceive minority groups as cold, incompetent or both. In this paper we adapt existing work to demonstrate that the SCM holds for contextualised word embeddings, and then use these results to evaluate a fine-tuning process designed to drive a language model away from stereotyped portrayals of minority groups. We find the SCM terms are better able to capture bias than demographic-agnostic terms related to pleasantness. Further, we were able to reduce the presence of stereotypes in the model through a simple fine-tuning procedure that required minimal human and computational resources, without harming downstream performance. We present this work as a prototype of a debiasing procedure that aims to remove the need for a priori knowledge of the specifics of bias in the model.
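
As a rough sketch of how SCM-style associations can be probed in contextualised embeddings, the snippet below compares group terms against warmth-related attribute terms using mean-pooled BERT representations and cosine similarity. The model, sentence templates and word lists are illustrative assumptions rather than the paper's actual protocol.

# Minimal sketch (assumptions flagged above): probe associations between
# group terms and Stereotype Content Model attribute terms (warm vs. cold)
# in contextualised embeddings from BERT.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence):
    # Mean-pool the last hidden states over all tokens in the sentence.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

group_terms = ["immigrants", "doctors"]
scm_terms = {"warm": ["friendly", "kind"], "cold": ["hostile", "unfriendly"]}

for group in group_terms:
    g = embed(f"{group} are here.")
    for pole, words in scm_terms.items():
        # Average similarity to the attribute pole's words.
        sims = [torch.cosine_similarity(g, embed(f"They are {w}."), dim=0).item()
                for w in words]
        print(group, pole, round(sum(sims) / len(sims), 3))

Comparing the resulting scores before and after a debiasing fine-tuning step would indicate whether stereotyped associations have weakened.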
