
Harini Suresh

Improved Text Classification via Test-Time Augmentation

Jun 27, 2022
Helen Lu, Divya Shanmugam, Harini Suresh, John Guttag

Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods

Jun 07, 2022
Angie Boggust, Harini Suresh, Hendrik Strobelt, John V. Guttag, Arvind Satyanarayan

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

Feb 17, 2021
Harini Suresh, Kathleen M. Lewis, John V. Guttag, Arvind Satyanarayan

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs

Jan 24, 2021
Harini Suresh, Steven R. Gomez, Kevin K. Nam, Arvind Satyanarayan

Underspecification Presents Challenges for Credibility in Modern Machine Learning

Nov 06, 2020
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

May 22, 2020
Harini Suresh, Natalie Lao, Ilaria Liccardi

Image segmentation of liver stage malaria infection with spatial uncertainty sampling

Nov 30, 2019
Ava P. Soleimany, Harini Suresh, Jose Javier Gonzalez Ortiz, Divya Shanmugam, Nil Gural, John Guttag, Sangeeta N. Bhatia

A Framework for Understanding Unintended Consequences of Machine Learning

Jan 28, 2019
Harini Suresh, John V. Guttag

Modeling Mistrust in End-of-Life Care

Jun 30, 2018
Willie Boag, Harini Suresh, Leo Anthony Celi, Peter Szolovits, Marzyeh Ghassemi

Learning Tasks for Multitask Learning: Heterogenous Patient Populations in the ICU

Jun 07, 2018
Harini Suresh, Jen J. Gong, John Guttag
