Malak Baslyman

Are We Aligned? A Preliminary Investigation of the Alignment of Responsible AI Values between LLMs and Human Judgment

Nov 06, 2025

Data Requirement Goal Modeling for Machine Learning Systems

Apr 10, 2025

Text-to-Image Representativity Fairness Evaluation Framework

Oct 18, 2024