Sayash Kapoor

A Safe Harbor for AI Evaluation and Red Teaming

Mar 07, 2024
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng-Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson

On the Societal Impact of Open Foundation Models

Feb 27, 2024
Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan

Foundation Model Transparency Reports

Feb 26, 2024
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang

The Foundation Model Transparency Index

Oct 19, 2023
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, Percy Liang

REFORMS: Reporting Standards for Machine Learning Based Science

Aug 15, 2023
Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, Jessica Hullman, Michael A. Lones, Momin M. Malik, Priyanka Nanayakkara, Russell A. Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J. Salganik, Marta Serra-Garcia, Brandon M. Stewart, Gilles Vandewiele, Arvind Narayanan

Leakage and the Reproducibility Crisis in ML-based Science

Jul 14, 2022
Sayash Kapoor, Arvind Narayanan

The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning

Apr 06, 2022
Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan

Balanced News Using Constrained Bandit-based Personalization

Jun 24, 2018
Sayash Kapoor, Vijay Keswani, Nisheeth K. Vishnoi, L. Elisa Celis

An Algorithmic Framework to Control Bias in Bandit-based Personalization

Feb 23, 2018
L. Elisa Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi
