Miranda Bogen

On the Societal Impact of Open Foundation Models

Feb 27, 2024
Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan

Towards Fairness in Personalized Ads Using Impression Variance Aware Reinforcement Learning

Jun 08, 2023
Aditya Srinivas Timmaraju, Mehdi Mashayekhi, Mingliang Chen, Qi Zeng, Quintin Fettes, Wesley Cheung, Yihan Xiao, Manojkumar Rangasamy Kannadasan, Pushkar Tripathi, Sean Gahagan, Miranda Bogen, Rob Roudani

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

Nov 10, 2022
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, Vítor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer

Adaptive Sampling Strategies to Construct Equitable Training Datasets

Jan 31, 2022
William Cai, Ro Encarnacion, Bobbie Chern, Sam Corbett-Davies, Miranda Bogen, Stevie Bergman, Sharad Goel

Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems

Mar 24, 2021
Chloé Bakalar, Renata Barreto, Stevie Bergman, Miranda Bogen, Bobbie Chern, Sam Corbett-Davies, Melissa Hall, Isabel Kloumann, Michelle Lam, Joaquin Quiñonero Candela, Manish Raghavan, Joshua Simons, Jonathan Tannen, Edmund Tong, Kate Vredenburgh, Jiejing Zhao
