Inioluwa Deborah Raji

REFORMS: Reporting Standards for Machine Learning Based Science

Aug 15, 2023
Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, Jessica Hullman, Michael A. Lones, Momin M. Malik, Priyanka Nanayakkara, Russell A. Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J. Salganik, Marta Serra-Garcia, Brandon M. Stewart, Gilles Vandewiele, Arvind Narayanan

Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear reporting standards for ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (Reporting Standards For Machine Learning Based Science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed based on a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility.

Organizational Governance of Emerging Technologies: AI Adoption in Healthcare

May 10, 2023
Jee Young Kim, William Boag, Freya Gulamali, Alifia Hasan, Henry David Jeffry Hogg, Mark Lifson, Deirdre Mulligan, Manesh Patel, Inioluwa Deborah Raji, Ajai Sehgal, Keo Shaw, Danny Tobey, Alexandra Valladares, David Vidal, Suresh Balu, Mark Sendak

Private and public sector structures and norms refine how emerging technology is used in practice. In healthcare, despite a proliferation of AI adoption, the organizational governance surrounding its use and integration is often poorly understood. In this research, the Health AI Partnership (HAIP) aims to better define the requirements for adequate organizational governance of AI systems in healthcare settings and to support health system leaders in making more informed decisions around AI adoption. To work towards this understanding, we first identify how standards for AI adoption in healthcare can be designed for easy and efficient use. We then map out the precise decision points involved in the practical institutional adoption of AI technology within specific health systems. Practically, we achieve this through a multi-organizational collaboration with leaders from major health systems across the United States and key informants from related fields. Working with the consultancy IDEO.org, we conducted usability-testing sessions with healthcare and AI ethics professionals. Usability analysis revealed a prototype structured around mock key decision points that align with how organizational leaders approach technology adoption. Concurrently, we conducted semi-structured interviews with 89 professionals in healthcare and other relevant fields. Using a modified grounded theory approach, we identified 8 key decision points and comprehensive procedures throughout the AI adoption lifecycle. This is one of the most detailed qualitative analyses to date of the current governance structures and processes involved in AI adoption by health systems in the United States. We hope these findings can inform future efforts to build capabilities to promote the safe, effective, and responsible adoption of emerging technologies in healthcare.

The Fallacy of AI Functionality

Jun 20, 2022
Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst

Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.

* 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22)  

AI and the Everything in the Whole Wide World Benchmark

Nov 26, 2021
Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily Denton, Alex Hanna

There is a tendency across different subfields in AI to valorize a small collection of influential benchmarks. These benchmarks operate as stand-ins for a range of anointed common problems that are frequently framed as foundational milestones on the path towards flexible and generalizable AI systems. State-of-the-art performance on these benchmarks is widely understood as indicative of progress towards these long-term goals. In this position paper, we explore the limits of such benchmarks in order to reveal the construct validity issues in their framing as the functionally "general" broad measures of progress they are set up to be.

* Accepted in NeurIPS 2021 Benchmarks and Datasets track 

About Face: A Survey of Facial Recognition Evaluation

Feb 01, 2021
Inioluwa Deborah Raji, Genevieve Fried

We survey over 100 face datasets constructed between 1976 and 2019, comprising 145 million images of over 17 million subjects drawn from a range of sources, demographics, and conditions. Our historical survey reveals that these datasets are contextually informed, shaped by changes in political motivations, technological capability, and current norms. We discuss how such influences mask specific practices (some of which may actually be harmful or otherwise problematic) and make a case for the explicit communication of such details in order to establish a more grounded understanding of the technology's function in the real world.

* Presented at AAAI 2020 Workshop on AI Evaluation 

Data and its (dis)contents: A survey of dataset development and use in machine learning research

Dec 09, 2020
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, Alex Hanna

Datasets have played a foundational role in the advancement of machine learning research. They form the basis for the models we design and deploy, as well as our primary medium for benchmarking and evaluation. Furthermore, the ways in which we collect, construct, and share these datasets inform the kinds of problems the field pursues and the methods explored in algorithm development. However, recent work from a breadth of perspectives has revealed the limitations of predominant practices in dataset collection and use. In this paper, we survey the many concerns raised about the way we collect and use data in machine learning and advocate that a more cautious and thorough understanding of data is necessary to address several of the practical and ethical issues of the field.

ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

Jan 08, 2020
Inioluwa Deborah Raji, Jingying Yang

We present the "Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles" (ABOUT ML) project as an initiative to operationalize ML transparency and work towards a standard ML documentation practice. We make the case for the project's relevance and effectiveness in consolidating disparate efforts across a variety of stakeholders, as well as bringing in the perspectives of currently missing voices that will be valuable in shaping future conversations. We describe the details of the initiative and the gaps we hope this project will help address.

* Presented at the Human-Centric Machine Learning workshop at the Neural Information Processing Systems conference, 2019. Equal contribution from authors. Jingying Yang is the current program lead for the ABOUT ML project at the Partnership on AI; more details about the project can be found at https://www.partnershiponai.org/about-ml/