Myra Cheng

NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps

Apr 02, 2024
Kristina Gligoric, Myra Cheng, Lucia Zheng, Esin Durmus, Dan Jurafsky

AnthroScore: A Computational Linguistic Measure of Anthropomorphism

Feb 03, 2024
Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky

CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations

Oct 17, 2023
Myra Cheng, Tiziano Piccardi, Diyi Yang

The Surveillance AI Pipeline

Sep 26, 2023
Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane

Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models

May 29, 2023
Myra Cheng, Esin Durmus, Dan Jurafsky

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan

Ethical and social risks of harm from Language Models

Dec 08, 2021
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, Iason Gabriel

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

Aug 29, 2021
Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

Mar 13, 2020
Maegan Tucker, Myra Cheng, Ellen Novoseller, Richard Cheng, Yisong Yue, Joel W. Burdick, Aaron D. Ames
