Emily Sheng

A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications

Oct 26, 2023
Ahmed Magooda, Alec Helyar, Kyle Jackson, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen

A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter

Oct 07, 2022
Kyra Yee, Alice Schoenauer Sebag, Olivia Redfield, Emily Sheng, Matthias Eck, Luca Belli

What do Bias Measures Measure?

Aug 07, 2021
Sunipa Dev, Emily Sheng, Jieyu Zhao, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Nanyun Peng, Kai-Wei Chang

Societal Biases in Language Generation: Progress and Challenges

Jun 02, 2021
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

Revealing Persona Biases in Dialogue Systems

Apr 18, 2021
Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng

Investigating Societal Biases in a Poetry Composition System

Nov 05, 2020
Emily Sheng, David Uthus

"Nice Try, Kiddo": Ad Hominems in Dialogue Systems

Oct 24, 2020
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

Figure 1 for "Nice Try, Kiddo": Ad Hominems in Dialogue Systems
Figure 2 for "Nice Try, Kiddo": Ad Hominems in Dialogue Systems
Figure 3 for "Nice Try, Kiddo": Ad Hominems in Dialogue Systems
Figure 4 for "Nice Try, Kiddo": Ad Hominems in Dialogue Systems
Viaarxiv icon

Towards Controllable Biases in Language Generation

May 01, 2020
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

The Woman Worked as a Babysitter: On Biases in Language Generation

Sep 03, 2019
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
