Helen Ngo

Artificial Intelligence Index Report 2023

Oct 05, 2023
Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, Raymond Perrault

Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements

Oct 06, 2022
Leandro von Werra, Lewis Tunstall, Abhishek Thakur, Alexandra Sasha Luccioni, Tristan Thrush, Aleksandra Piktus, Felix Marty, Nazneen Rajani, Victor Mustar, Helen Ngo, Omar Sanseviero, Mario Šaško, Albert Villanova, Quentin Lhoest, Julien Chaumond, Margaret Mitchell, Alexander M. Rush, Thomas Wolf, Douwe Kiela

The AI Index 2022 Annual Report

May 02, 2022
Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie Sakhaee, Yoav Shoham, Jack Clark, Raymond Perrault

No News is Good News: A Critique of the One Billion Word Benchmark

Oct 25, 2021
Helen Ngo, João G. M. Araújo, Jeffrey Hui, Nicholas Frosst

Mitigating harm in language models with conditional-likelihood filtration

Sep 04, 2021
Helen Ngo, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst
