Aylin Caliskan

No Thoughts Just AI: Biased LLM Recommendations Limit Human Agency in Resume Screening

Sep 04, 2025

Bias Amplification in Stable Diffusion's Representation of Stigma Through Skin Tones and Their Homogeneity

Aug 24, 2025

Biases Propagate in Encoder-based Vision-Language Models: A Systematic Analysis From Intrinsic Measures to Zero-shot Retrieval Outcomes

Jun 06, 2025

Talent or Luck? Evaluating Attribution Bias in Large Language Models

May 28, 2025

VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models

May 28, 2025

Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders

Feb 11, 2025

A Taxonomy of Stereotype Content in Large Language Models

Jul 31, 2024

Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval

Jul 29, 2024

Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach

Jul 24, 2024

BiasDora: Exploring Hidden Biased Associations in Vision-Language Models

Jul 02, 2024