Sara Hooker

The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation


Oct 06, 2021
Orevaoghene Ahia, Julia Kreutzer, Sara Hooker

* Accepted to Findings of EMNLP 2021 


A Tale Of Two Long Tails


Jul 27, 2021
Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker

* Preliminary results accepted to Workshop on Uncertainty and Robustness in Deep Learning (UDL), ICML, 2021 


When does loss-based prioritization fail?


Jul 16, 2021
Niel Teng Hu, Xinyu Hu, Rosanne Liu, Sara Hooker, Jason Yosinski



Randomness In Neural Network Training: Characterizing The Impact of Tooling


Jun 22, 2021
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker

* 21 pages, 10 figures 


Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization


Feb 02, 2021
Kale-ab Tessera, Sara Hooker, Benjamin Rosman



Characterising Bias in Compressed Models


Oct 06, 2020
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, Emily Denton



The Hardware Lottery


Sep 21, 2020
Sara Hooker



Estimating Example Difficulty using Variance of Gradients


Aug 26, 2020
Chirag Agarwal, Sara Hooker

* Accepted to Workshop on Human Interpretability in Machine Learning (WHI), ICML, 2020 


Selective Brain Damage: Measuring the Disparate Impact of Model Pruning


Nov 13, 2019
Sara Hooker, Aaron Courville, Yann Dauphin, Andrea Frome



The State of Sparsity in Deep Neural Networks


Feb 25, 2019
Trevor Gale, Erich Elsen, Sara Hooker



Evaluating Feature Importance Estimates


Jun 28, 2018
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim

* presented at 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden 


The (Un)reliability of saliency methods


Nov 02, 2017
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim

