
Aylin Caliskan

Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

Dec 21, 2022

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

Nov 07, 2022

American == White in Multimodal Language-and-Image AI

Jul 01, 2022

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics

Jun 07, 2022

Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals

Jun 03, 2022

Markedness in Visual Semantic AI

May 23, 2022

Evidence for Hypodescent in Visual Semantic AI

May 22, 2022

Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations

Mar 14, 2022

VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models

Mar 14, 2022

Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models

Oct 01, 2021