
Myra Cheng

NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps

Apr 02, 2024

AnthroScore: A Computational Linguistic Measure of Anthropomorphism

Feb 03, 2024

CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations

Oct 17, 2023

The Surveillance AI Pipeline

Sep 26, 2023

Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models

May 29, 2023

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022

Ethical and social risks of harm from Language Models

Dec 08, 2021

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

Aug 29, 2021

Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

Mar 13, 2020