Aylin Caliskan

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale
Nov 07, 2022
Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan

American == White in Multimodal Language-and-Image AI
Jul 01, 2022
Robert Wolfe, Aylin Caliskan

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
Jun 07, 2022
Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, Mahzarin R. Banaji

Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals
Jun 03, 2022
Shiva Omrani Sabbaghi, Aylin Caliskan

Markedness in Visual Semantic AI
May 23, 2022
Robert Wolfe, Aylin Caliskan

Evidence for Hypodescent in Visual Semantic AI
May 22, 2022
Robert Wolfe, Mahzarin R. Banaji, Aylin Caliskan

Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations
Mar 14, 2022
Robert Wolfe, Aylin Caliskan

VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models
Mar 14, 2022
Robert Wolfe, Aylin Caliskan

Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
Oct 01, 2021
Robert Wolfe, Aylin Caliskan

Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
Oct 28, 2020
Ryan Steed, Aylin Caliskan
