Charles Lu

Federated Conformal Predictors for Distributed Uncertainty Quantification

Jun 01, 2023
Charles Lu, Yaodong Yu, Sai Praneeth Karimireddy, Michael I. Jordan, Ramesh Raskar

Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning, since it can be easily applied as a post-processing step to already trained models. In this paper, we extend conformal prediction to the federated learning setting. The main challenge we face is data heterogeneity across the clients, which violates the fundamental tenet of exchangeability required for conformal prediction. We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction (FCP) framework. We show that FCP enjoys rigorous theoretical guarantees and excellent empirical performance on several computer vision and medical imaging datasets. Our results demonstrate a practical approach to incorporating meaningful uncertainty quantification in distributed and heterogeneous environments. We provide the code used in our experiments at https://github.com/clu5/federated-conformal.

* 23 pages, 18 figures, accepted to International Conference on Machine Learning (ICML 2023) 
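
The FCP construction and its partial-exchangeability analysis are given in the paper and the linked repository. As a point of reference only, the sketch below shows plain split conformal prediction with calibration scores naively pooled across clients before taking a single quantile; the pooling step, score choice, and names are illustrative assumptions rather than the paper's aggregation rule.

import numpy as np

def conformal_quantile(scores, alpha):
    # Finite-sample-corrected (1 - alpha) quantile of the nonconformity scores.
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, level, method="higher"))

def pooled_federated_threshold(client_probs, client_labels, alpha=0.1):
    # Pool calibration scores (1 - p_true) from every client, then take one quantile.
    # Naive pooling assumes exchangeability across clients, which FCP relaxes.
    scores = [1.0 - probs[np.arange(len(labels)), labels]
              for probs, labels in zip(client_probs, client_labels)]
    return conformal_quantile(np.concatenate(scores), alpha)

def prediction_set(probs, threshold):
    # Include every class whose score 1 - p_k falls below the calibrated threshold.
    return np.where(1.0 - probs <= threshold)[0].tolist()

# Toy usage with random softmax outputs from two clients.
rng = np.random.default_rng(0)
clients_p = [rng.dirichlet(np.ones(5), size=100) for _ in range(2)]
clients_y = [rng.integers(0, 5, size=100) for _ in range(2)]
tau = pooled_federated_threshold(clients_p, clients_y, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(5)), tau))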

Estimating Test Performance for AI Medical Devices under Distribution Shift with Conformal Prediction

Jul 12, 2022
Charles Lu, Syed Rakin Ahmed, Praveer Singh, Jayashree Kalpathy-Cramer

Estimating the test performance of software AI-based medical devices under distribution shifts is crucial for evaluating the safety, efficiency, and usability prior to clinical deployment. Due to the nature of regulated medical device software and the difficulty in acquiring large amounts of labeled medical datasets, we consider the task of predicting the test accuracy of an arbitrary black-box model on an unlabeled target domain without modification to the original training process or any distributional assumptions of the original source data (i.e. we treat the model as a "black-box" and only use the predicted output responses). We propose a "black-box" test estimation technique based on conformal prediction and evaluate it against other methods on three medical imaging datasets (mammography, dermatology, and histopathology) under several clinically relevant types of distribution shift (institution, hardware scanner, atlas, hospital). We hope that by promoting practical and effective estimation techniques for black-box models, manufacturers of medical devices will develop more standardized and realistic evaluation procedures to improve the robustness and trustworthiness of clinical AI tools.

* Principles of Distribution Shift (PODS) Workshop at ICML 2022 
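
The estimator proposed in the paper builds on conformal prediction; the snippet below instead sketches a simple confidence-thresholding baseline for the same task (in the spirit of average thresholded confidence), estimating target-domain accuracy from unlabeled predictions with a threshold fit on labeled source data. Function names and the threshold rule are illustrative, not the paper's method.

import numpy as np

def fit_confidence_threshold(source_probs, source_labels):
    # Pick a threshold so the fraction of source examples above it matches source accuracy.
    conf = source_probs.max(axis=1)
    acc = float((source_probs.argmax(axis=1) == source_labels).mean())
    return float(np.quantile(conf, 1.0 - acc))

def estimate_target_accuracy(target_probs, threshold):
    # Predicted accuracy = fraction of unlabeled target examples whose max softmax
    # confidence clears the source-fitted threshold.
    return float((target_probs.max(axis=1) >= threshold).mean())

# Toy usage with random softmax outputs.
rng = np.random.default_rng(1)
src_p, src_y = rng.dirichlet(np.ones(3), size=500), rng.integers(0, 3, size=500)
tgt_p = rng.dirichlet(np.ones(3), size=200)
print(estimate_target_accuracy(tgt_p, fit_confidence_threshold(src_p, src_y)))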

Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets

Jul 05, 2022
Charles Lu, Anastasios N. Angelopoulos, Stuart Pomerantz

The regulatory approval and broad clinical deployment of medical AI have been hampered by the perception that deep learning models fail in unpredictable and possibly catastrophic ways. A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results. Recent developments in distribution-free uncertainty quantification present practical solutions for these issues by providing reliability guarantees for black-box models on arbitrary data distributions as formally valid finite-sample prediction intervals. Our work applies these new uncertainty quantification methods -- specifically conformal prediction -- to a deep-learning model for grading the severity of spinal stenosis in lumbar spine MRI. We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity within a user-defined probability (confidence interval). On a dataset of 409 MRI exams processed by the deep-learning model, the conformal method provides tight coverage with small prediction set sizes. Furthermore, we explore the potential clinical applicability of flagging cases with high uncertainty predictions (large prediction sets) by quantifying an increase in the prevalence of significant imaging abnormalities (e.g. motion artifacts, metallic artifacts, and tumors) that could degrade confidence in predictive performance when compared to a random sample of cases.
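
To make the construction concrete, the sketch below shows one way to form contiguous ordinal prediction sets: grow a window of severity grades outward from the most probable grade until the accumulated softmax mass reaches a conformally calibrated level. It illustrates the general idea rather than the exact procedure in the paper, and all names are assumptions.

import numpy as np

def grow_until(probs, stop):
    # Greedily grow a contiguous window of ordinal grades around the argmax,
    # stopping once stop(lo, hi, mass) is True; returns (lo, hi, mass).
    lo = hi = int(np.argmax(probs))
    mass = probs[lo]
    while not stop(lo, hi, mass) and (lo > 0 or hi < len(probs) - 1):
        left = probs[lo - 1] if lo > 0 else -1.0
        right = probs[hi + 1] if hi < len(probs) - 1 else -1.0
        if left >= right:
            lo -= 1
            mass += probs[lo]
        else:
            hi += 1
            mass += probs[hi]
    return lo, hi, mass

def calibrate_tau(cal_probs, cal_labels, alpha=0.1):
    # Conformal score = mass accumulated when the true grade first enters the window;
    # tau is a finite-sample-corrected quantile of these scores.
    scores = [grow_until(p, lambda lo, hi, m, y=y: lo <= y <= hi)[2]
              for p, y in zip(cal_probs, cal_labels)]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, level, method="higher"))

def ordinal_prediction_set(probs, tau):
    # Contiguous set of grades whose accumulated mass reaches the calibrated tau.
    lo, hi, _ = grow_until(probs, lambda lo, hi, m: m >= tau)
    return list(range(lo, hi + 1))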

Three Applications of Conformal Prediction for Rating Breast Density in Mammography

Jun 23, 2022
Charles Lu, Ken Chang, Praveer Singh, Jayashree Kalpathy-Cramer

Breast cancer is one of the most common cancers, and early detection through mammography screening is crucial for improving patient outcomes. Assessing mammographic breast density is clinically important, as denser breasts carry higher risk and are more likely to occlude tumors. Manual assessment by experts is both time-consuming and subject to inter-rater variability. As such, there has been increased interest in developing deep learning methods for mammographic breast density assessment. Although deep learning has demonstrated impressive performance on several prediction tasks in mammography, clinical deployment of deep learning systems is still relatively rare; historically, mammography Computer-Aided Diagnosis (CAD) has over-promised and failed to deliver. This is in part due to the inability to intuitively convey the algorithm's uncertainty to the clinician, which would greatly enhance usability. Conformal prediction is well suited to increasing reliability and trust in deep learning tools, but such methods lack realistic evaluations on medical datasets. In this paper, we present a detailed analysis of three possible applications of conformal prediction to medical imaging tasks: distribution shift characterization, prediction quality improvement, and subgroup fairness analysis. Our results show the potential of distribution-free uncertainty quantification techniques to enhance trust in AI algorithms and expedite their translation into clinical use.

* Accepted to Workshop on Distribution-Free Uncertainty Quantification at ICML 2022 
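
Evaluations of this kind typically track empirical coverage and average prediction-set size, broken out by data source or patient subgroup. A minimal sketch of that bookkeeping is below; the function and group names are assumptions for illustration.

import numpy as np

def coverage_and_size(pred_sets, labels):
    # Empirical coverage and mean prediction-set cardinality.
    covered = [y in s for s, y in zip(pred_sets, labels)]
    sizes = [len(s) for s in pred_sets]
    return float(np.mean(covered)), float(np.mean(sizes))

def subgroup_report(pred_sets, labels, groups):
    # Coverage and size per subgroup (e.g. scanner, site, or density category),
    # useful for characterizing distribution shift and for fairness audits.
    report = {}
    for g in sorted(set(groups)):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        report[g] = coverage_and_size([pred_sets[i] for i in idx],
                                      [labels[i] for i in idx])
    return report

# Toy usage: prediction sets over three classes, two scanner subgroups.
sets = [[0], [0, 1], [2], [1, 2], [0]]
ys = [0, 1, 2, 0, 0]
scanners = ["A", "A", "B", "B", "A"]
print(subgroup_report(sets, ys, scanners))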

Distribution-Free Federated Learning with Conformal Predictions

Oct 14, 2021
Charles Lu, Jayashree Kalpathy-Cramer

Federated learning has attracted considerable interest for collaborative machine learning in healthcare to leverage separate institutional datasets while maintaining patient privacy. However, additional challenges such as poor calibration and lack of interpretability may also hamper widespread deployment of federated models into clinical practice and lead to user distrust or misuse of ML tools in high-stakes clinical decision-making. In this paper, we propose to address these challenges by incorporating an adaptive conformal framework into federated learning to ensure distribution-free prediction sets that provide coverage guarantees and uncertainty estimates without requiring any additional modifications to the model or assumptions. Empirical results on the MedMNIST medical imaging benchmark demonstrate that our federated method provides tighter coverage at lower average cardinality than local conformal predictions on six different medical imaging datasets spanning 2D and 3D multi-class classification tasks. Further, we correlate class entropy and prediction set size to assess task uncertainty with conformal methods.
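
The abstract refers to an adaptive conformal framework; one widely used adaptive nonconformity score is the cumulative-softmax (APS) score sketched below. This is a generic illustration, not necessarily the exact score or federation scheme used in this paper.

import numpy as np

def aps_score(probs, label):
    # APS score: total softmax mass of all classes ranked at least as probable as
    # the true class (the randomization used for exact coverage is omitted).
    order = np.argsort(-probs)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(probs))
    return float(np.cumsum(probs[order])[ranks[label]])

def aps_prediction_set(probs, tau):
    # Take classes in decreasing probability until their cumulative mass reaches tau.
    order = np.argsort(-probs)
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), tau)) + 1
    return sorted(order[:cutoff].tolist())

# tau would be the finite-sample-corrected (1 - alpha) quantile of aps_score over a
# calibration split, computed on each site's local data in the federated setting.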

Deploying clinical machine learning? Consider the following...

Sep 14, 2021
Charles Lu, Ken Chang, Praveer Singh, Stuart Pomerantz, Sean Doyle, Sujay Kakarmath, Christopher Bridge, Jayashree Kalpathy-Cramer

Despite the intense attention and investment into clinical machine learning (CML) research, relatively few applications convert to clinical practice. While research is important in advancing the state-of-the-art, translation is equally important in bringing these technologies into a position to ultimately impact patient care and live up to the extensive expectations surrounding AI in healthcare. To better characterize a holistic perspective among researchers and practitioners, we survey several participants with experience in developing CML for clinical deployment about their lessons learned. We collate these insights and identify several main categories of barriers and pitfalls in order to better design and develop clinical machine learning applications.

Fair Conformal Predictors for Applications in Medical Imaging

Sep 09, 2021
Charles Lu, Andreanne Lemay, Ken Chang, Katharina Hoebel, Jayashree Kalpathy-Cramer

Deep learning has the potential to augment many components of the clinical workflow, such as medical image interpretation. However, the translation of these black-box algorithms into clinical practice has been marred by a relative lack of transparency compared to conventional machine learning methods, hindering clinician trust in the systems for critical medical decision-making. Specifically, common deep learning approaches do not have intuitive ways of expressing uncertainty with respect to cases that might require further human review. Furthermore, the possibility of algorithmic bias has caused hesitancy regarding the use of developed algorithms in clinical settings. To these ends, we explore how conformal methods can complement deep learning models by providing both a clinically intuitive way of expressing model uncertainty (by means of confidence prediction sets) and greater model transparency in clinical workflows. In this paper, we conduct a field survey with clinicians to assess clinical use cases of conformal predictions. Next, we conduct experiments on mammographic breast density and dermatology photography datasets to demonstrate the utility of conformal predictions in "rule-in" and "rule-out" disease scenarios. Further, we show that conformal predictors can be used to equalize coverage with respect to patient demographics such as race and skin tone. We find conformal prediction to be a promising framework with the potential to increase clinical usability and transparency for better collaboration between deep learning algorithms and clinicians.
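
One standard way to equalize coverage across demographic groups is group-conditional (Mondrian) calibration: a separate conformal threshold is fit per group so that each group receives its own coverage guarantee. The sketch below illustrates that idea with assumed names; it is not necessarily the exact procedure used in the paper.

import numpy as np

def groupwise_thresholds(cal_probs, cal_labels, cal_groups, alpha=0.1):
    # Calibrate one threshold per group (e.g. race or skin-tone category) so each
    # group gets its own 1 - alpha coverage guarantee.
    thresholds = {}
    groups = np.asarray(cal_groups)
    for g in set(cal_groups):
        probs = np.asarray(cal_probs)[groups == g]
        labels = np.asarray(cal_labels)[groups == g]
        scores = 1.0 - probs[np.arange(len(labels)), labels]
        n = len(scores)
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        thresholds[g] = float(np.quantile(scores, level, method="higher"))
    return thresholds

def groupwise_prediction_set(probs, group, thresholds):
    # Build the prediction set with the threshold calibrated for this example's group.
    return np.where(1.0 - probs <= thresholds[group])[0].tolist()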

Evaluating subgroup disparity using epistemic uncertainty in mammography

Jul 15, 2021
Charles Lu, Andreanne Lemay, Katharina Hoebel, Jayashree Kalpathy-Cramer

As machine learning (ML) continues to be integrated into healthcare systems that affect clinical decision making, new strategies will need to be incorporated in order to effectively detect and evaluate subgroup disparities and to ensure accountability and generalizability in clinical workflows. In this paper, we explore how epistemic uncertainty can be used to evaluate disparity across patient demographic (race) and data acquisition (scanner) subgroups for breast density assessment on a dataset of 108,190 mammograms collected from 33 clinical sites. Our results show that even when aggregate performance is comparable, the choice of uncertainty quantification metric can significantly affect results at the subgroup level. We hope this analysis can promote further work on how uncertainty can be leveraged to increase the transparency of machine learning applications for clinical deployment.

* Accepted to the Interpretable Machine Learning in Healthcare workshop at the ICML 2021 conference 
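
The paper compares uncertainty quantification metrics across subgroups; one common way to obtain an epistemic estimate is the mutual-information decomposition over stochastic forward passes (e.g. MC dropout) or ensemble members. The sketch below assumes the per-pass softmax outputs are already available and is not tied to the paper's specific metric.

import numpy as np

def epistemic_uncertainty(passes):
    # passes: array of shape (T, n_examples, n_classes) of softmax outputs from
    # T stochastic forward passes. Epistemic estimate = entropy of the mean
    # prediction minus the mean per-pass entropy (mutual information).
    eps = 1e-12
    mean_p = passes.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=1)
    expected = -(passes * np.log(passes + eps)).sum(axis=2).mean(axis=0)
    return total - expected

def subgroup_mean_uncertainty(uncertainty, groups):
    # Average uncertainty per subgroup (e.g. race or scanner) for disparity checks.
    groups = np.asarray(groups)
    return {g: float(uncertainty[groups == g].mean()) for g in set(groups.tolist())}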

Addressing catastrophic forgetting for medical domain expansion

Mar 24, 2021
Sharut Gupta, Praveer Singh, Ken Chang, Liangqiong Qu, Mehak Aggarwal, Nishanth Arun, Ashwin Vaswani, Shruti Raghavan, Vibha Agarwal, Mishka Gidwani, Katharina Hoebel, Jay Patel, Charles Lu, Christopher P. Bridge, Daniel L. Rubin, Jayashree Kalpathy-Cramer

Model brittleness is a key concern when deploying deep learning models in real-world medical settings. A model that has high performance at one institution may suffer a significant decline in performance when tested at other institutions. While pooling datasets from multiple institutions and retraining may provide a straightforward solution, it is often infeasible and may compromise patient privacy. An alternative approach is to fine-tune the model on subsequent institutions after training on the original institution. Notably, this approach degrades model performance at the original institution, a phenomenon known as catastrophic forgetting. In this paper, we develop an approach to address catastrophic forgetting based on elastic weight consolidation combined with modulation of batch normalization statistics under two scenarios: first, for expanding the domain from one imaging system's data to another imaging system's, and second, for expanding the domain from a large multi-institutional dataset to another single-institution dataset. We show that our approach outperforms several other state-of-the-art approaches and provide theoretical justification for the efficacy of batch normalization modulation. The results of this study are generally applicable to the deployment of any clinical deep learning model that requires domain expansion.

* First three authors contributed equally 
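
Elastic weight consolidation anchors parameters that were important at the original institution with a quadratic penalty while fine-tuning on the new one; the batch normalization statistic modulation described above is a separate ingredient not shown here. Below is a minimal PyTorch-style sketch with illustrative names, assuming a diagonal Fisher information estimate has already been computed on the original data.

import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    # Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_i_old)^2,
    # where old_params and fisher map parameter names to tensors saved after
    # training on the original institution.
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During fine-tuning on the new institution (loop body, assumed names):
#     loss = task_loss(model(x), y) + ewc_penalty(model, old_params, fisher)
#     loss.backward()
#     optimizer.step()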