Elizabeth Anne Watkins

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024
Bertie Vidgen, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse Khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, Joaquin Vanschoren


Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application

May 15, 2023
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

(4 figures)

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Oct 02, 2022
Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

(4 figures)

The four-fifths rule is not disparate impact: a woeful tale of epistemic trespassing in algorithmic fairness

Feb 19, 2022
Elizabeth Anne Watkins, Michael McKenna, Jiahao Chen

(3 figures)