Rory Sayres

A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models

Mar 18, 2024

Closing the AI generalization gap by adjusting for dermatology condition distribution differences across clinical settings

Feb 23, 2024

Towards Expert-Level Medical Question Answering with Large Language Models

May 16, 2023

Underspecification Presents Challenges for Credibility in Modern Machine Learning

Nov 06, 2020

Improving Medical Annotation Quality to Decrease Labeling Burden Using Stratified Noisy Cross-Validation

Sep 22, 2020

Deep Learning to Assess Glaucoma Risk and Associated Features in Fundus Images

Dec 21, 2018

Deep Learning vs. Human Graders for Classifying Severity Levels of Diabetic Retinopathy in a Real-World Nationwide Screening Program

Oct 18, 2018

Direct Uncertainty Prediction for Medical Second Opinions

Sep 13, 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

Jun 07, 2018