Renee Shelby

Harm Amplification in Text-to-Image Models

Feb 01, 2024
Susan Hao, Renee Shelby, Yuchi Liu, Hansa Srinivasan, Mukul Bhutani, Burcu Karagol Ayan, Shivani Poddar, Sarah Laszlo

Safety and Fairness for Content Moderation in Generative Models

Jun 09, 2023
Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, Renee Shelby

AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia

May 19, 2023
Rida Qadri, Renee Shelby, Cynthia L. Bennett, Emily Denton

PaLM 2 Technical Report

May 17, 2023
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML

Oct 06, 2022
Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Joshua Kroll, AJung Moon, Negar Rostamzadeh
