Negar Rostamzadeh

A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Mar 18, 2024
Stephen R. Pfohl, Heather Cole-Lewis, Rory Sayres, Darlene Neal, Mercy Asiedu, Awa Dieng, Nenad Tomasev, Qazi Mamunur Rashid, Shekoofeh Azizi, Negar Rostamzadeh, Liam G. McCoy, Leo Anthony Celi, Yun Liu, Mike Schaekermann, Alanna Walton, Alicia Parrish, Chirag Nagpal, Preeti Singh, Akeiylah Dewitt, Philip Mansfield, Sushant Prakash, Katherine Heller, Alan Karthikesalingam, Christopher Semturs, Joelle Barral, Greg Corrado, Yossi Matias, Jamila Smith-Loud, Ivor Horn, Karan Singhal

The Case for Globalizing Fairness: A Mixed Methods Study on Colonialism, AI, and Health in Africa
Mar 11, 2024
Mercy Asiedu, Awa Dieng, Iskandar Haykel, Negar Rostamzadeh, Stephen Pfohl, Chirag Nagpal, Maria Nagawa, Abigail Oppong, Sanmi Koyejo, Katherine Heller

From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML
Oct 06, 2022
Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Joshua Kroll, AJung Moon, Negar Rostamzadeh

Inducing bias is simpler than you think
May 31, 2022
Stefano Sarao Mannelli, Federica Gerace, Negar Rostamzadeh, Luca Saglietti

Evaluation Gaps in Machine Learning Practice
May 11, 2022
Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, Vinodkumar Prabhakaran

Disability prediction in multiple sclerosis using performance outcome measures and demographic data
Apr 08, 2022
Subhrajit Roy, Diana Mincu, Lev Proleev, Negar Rostamzadeh, Chintan Ghate, Natalie Harris, Christina Chen, Jessica Schrouff, Nenad Tomasev, Fletcher Lee Hartsell, Katherine Heller

Healthsheet: Development of a Transparency Artifact for Health Datasets
Feb 26, 2022
Negar Rostamzadeh, Diana Mincu, Subhrajit Roy, Andrew Smart, Lauren Wilcox, Mahima Pushkarna, Jessica Schrouff, Razvan Amironesei, Nyalleng Moorosi, Katherine Heller

se-Shweshwe Inspired Fashion Generation
Feb 25, 2022
Lindiwe Brigitte Malobola, Negar Rostamzadeh, Shakir Mohamed

Ethics and Creativity in Computer Vision
Dec 06, 2021
Negar Rostamzadeh, Emily Denton, Linda Petrini

Thinking Beyond Distributions in Testing Machine Learned Models
Dec 06, 2021
Negar Rostamzadeh, Ben Hutchinson, Christina Greer, Vinodkumar Prabhakaran
