Debora Nozza

FairBelief - Assessing Harmful Beliefs in Language Models

Feb 27, 2024
Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza


A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation

Oct 25, 2023
Giuseppe Attanasio, Flor Miriam Plaza-del-Arco, Debora Nozza, Anne Lauscher


Weigh Your Own Words: Improving Hate Speech Counter Narrative Generation via Attention Regularization

Sep 05, 2023
Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini


Leveraging Label Variation in Large Language Models for Zero-Shot Text Classification

Jul 24, 2023
Flor Miriam Plaza-del-Arco, Debora Nozza, Dirk Hovy


What about em? How Commercial Machine Translation Fails to Handle (Neo-)Pronouns

May 25, 2023
Anne Lauscher, Debora Nozza, Archie Crowley, Ehm Miltersen, Dirk Hovy


Measuring Harmful Representations in Scandinavian Language Models

Nov 21, 2022
Samia Touileb, Debora Nozza


Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan


Data-Efficient Strategies for Expanding Hate Speech Detection into Under-Resourced Languages

Oct 20, 2022
Paul Röttger, Debora Nozza, Federico Bianchi, Dirk Hovy


The State of Profanity Obfuscation in Natural Language Processing

Oct 14, 2022
Debora Nozza, Dirk Hovy
