
Mustafa Nasir-Moin

Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging

Mar 23, 2023
Todd C. Hollon, Cheng Jiang, Asadur Chowdury, Mustafa Nasir-Moin, Akhil Kondepudi, Alexander Aabedi, Arjun Adapa, Wajd Al-Holou, Jason Heth, Oren Sagher, Pedro Lowenstein, Maria Castro, Lisa Irina Wadiura, Georg Widhalm, Volker Neuschmelting, David Reinecke, Niklas von Spreckelsen, Mitchel S. Berger, Shawn L. Hervey-Jumper, John G. Golfinos, Matija Snuderl, Sandra Camelo-Piragua, Christian Freudiger, Honglak Lee, Daniel A. Orringer

Molecular classification has transformed the management of brain tumors by enabling more accurate prognostication and personalized treatment. However, timely molecular diagnostic testing for patients with brain tumors is limited, complicating surgical and adjuvant treatment and obstructing clinical trial enrollment. In this study, we developed DeepGlioma, a rapid ($< 90$ seconds), artificial-intelligence-based diagnostic screening system to streamline the molecular diagnosis of diffuse gliomas. DeepGlioma is trained using a multimodal dataset that includes stimulated Raman histology (SRH), a rapid, label-free, non-consumptive optical imaging method, and large-scale public genomic data. In a prospective, multicenter, international testing cohort of patients with diffuse glioma ($n=153$) who underwent real-time SRH imaging, we demonstrate that DeepGlioma can predict the molecular alterations used by the World Health Organization to define the adult-type diffuse glioma taxonomy (IDH mutation, 1p19q co-deletion and ATRX mutation), achieving a mean molecular classification accuracy of $93.3\pm 1.6\%$. Our results demonstrate how artificial intelligence and optical histology can provide a rapid and scalable adjunct to wet lab methods for the molecular screening of patients with diffuse glioma.
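The headline metric above averages accuracy over the three WHO-defining markers (IDH, 1p19q, ATRX), each treated as a binary call per patient. A minimal sketch of that bookkeeping, with illustrative names and fabricated toy data rather than the paper's code:

```python
# Hypothetical sketch of "mean molecular classification accuracy": three
# binary marker predictions per patient, averaged across markers.
# All data below are made up for illustration.

MARKERS = ["IDH", "1p19q", "ATRX"]

def mean_molecular_accuracy(y_true, y_pred):
    """Per-marker accuracy and its mean over all markers.

    y_true, y_pred: lists of dicts mapping marker name -> 0/1 call.
    """
    per_marker = {}
    for m in MARKERS:
        correct = sum(t[m] == p[m] for t, p in zip(y_true, y_pred))
        per_marker[m] = correct / len(y_true)
    return per_marker, sum(per_marker.values()) / len(MARKERS)

# Toy cohort of 4 patients (fabricated values).
truth = [{"IDH": 1, "1p19q": 1, "ATRX": 0},
         {"IDH": 1, "1p19q": 0, "ATRX": 1},
         {"IDH": 0, "1p19q": 0, "ATRX": 0},
         {"IDH": 1, "1p19q": 1, "ATRX": 0}]
preds = [{"IDH": 1, "1p19q": 1, "ATRX": 0},
         {"IDH": 1, "1p19q": 0, "ATRX": 0},
         {"IDH": 0, "1p19q": 0, "ATRX": 0},
         {"IDH": 1, "1p19q": 0, "ATRX": 0}]

per_marker, mean_acc = mean_molecular_accuracy(truth, preds)
```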

* Paper published in Nature Medicine 

Identifying and mitigating bias in algorithms used to manage patients in a pandemic

Oct 30, 2021
Yifan Li, Garrett Yoon, Mustafa Nasir-Moin, David Rosenberg, Sean Neifert, Douglas Kondziolka, Eric Karl Oermann

Numerous COVID-19 clinical decision support systems have been developed. However, many of these systems lack demonstrated validity due to methodological shortcomings, including algorithmic bias. Methods: Logistic regression models were created to predict COVID-19 mortality, ventilator status and inpatient status using a real-world dataset from four hospitals in New York City, and were analyzed for biases against race, gender and age. Simple thresholding adjustments were applied during training to establish more equitable models. Results: Compared to the naively trained models, the calibrated models showed a 57% decrease in the number of biased trials, while predictive performance, measured by area under the receiver operating characteristic curve (AUC), remained unchanged. After calibration, the average sensitivity of the predictive models increased from 0.527 to 0.955. Conclusion: We demonstrate that naively training and deploying machine learning models on real-world data for COVID-19 predictive analytics carries a high risk of bias. Simple adjustments or calibrations during model training can lead to substantial and sustained gains in fairness on subsequent deployment.
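One common form the abstract's "simple thresholding adjustments" can take is choosing a per-group decision threshold so every demographic group reaches the same target sensitivity. A hedged sketch, with function names and toy data that are assumptions rather than the authors' implementation:

```python
import math

# Illustrative per-group threshold calibration: pick, for each group, the
# highest threshold whose sensitivity (recall) meets a target. Names and
# data are assumptions, not the paper's code.

def calibrate_threshold(scores, labels, target_sensitivity):
    """Highest threshold achieving at least the target sensitivity."""
    positives = sorted((s for s, y in zip(scores, labels) if y == 1),
                       reverse=True)
    if not positives:
        return 0.5  # no positives in this group: default cut-off
    # Using the k-th highest positive score as threshold classifies
    # at least k positives correctly.
    k = math.ceil(target_sensitivity * len(positives))
    return positives[k - 1]

def sensitivity(scores, labels, thr):
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= thr)
    pos = sum(labels)
    return tp / pos if pos else 0.0

# Toy data: two groups whose score distributions differ.
group_a = ([0.9, 0.8, 0.4, 0.2], [1, 1, 1, 0])
group_b = ([0.6, 0.5, 0.3, 0.1], [1, 1, 0, 0])

thr_a = calibrate_threshold(*group_a, target_sensitivity=0.95)
thr_b = calibrate_threshold(*group_b, target_sensitivity=0.95)
```

Because each group gets its own cut-off, both groups meet the sensitivity target even though a single shared threshold would not.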

* 4 pages, 1 table 

Patient level simulation and reinforcement learning to discover novel strategies for treating ovarian cancer

Oct 22, 2021
Brian Murphy, Mustafa Nasir-Moin, Grace von Oiste, Viola Chen, Howard A Riina, Douglas Kondziolka, Eric K Oermann

The prognosis for patients with epithelial ovarian cancer remains dismal despite improvements in survival for other cancers. Treatment involves multiple lines of chemotherapy and becomes increasingly heterogeneous after first-line therapy. Reinforcement learning with real-world outcomes data has the potential to identify novel treatment strategies to improve overall survival. We design a reinforcement learning environment to model epithelial ovarian cancer treatment trajectories and use model-free reinforcement learning to investigate therapeutic regimens for simulated patients.
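The model-free setup the abstract describes can be sketched with tabular Q-learning on a toy patient simulator. The three-state "patient" environment, its transition probabilities, and its rewards below are entirely made up for illustration; they stand in for the paper's far richer trajectory model:

```python
import random

# Minimal model-free RL sketch: Q-learning over a fabricated 3-state
# treatment environment. All dynamics are illustrative assumptions.

STATES = ["stable", "progressing", "terminal"]
ACTIONS = ["line1", "line2"]

def step(state, action, rng):
    # Hypothetical dynamics: line1 works better from "stable",
    # line2 works better from "progressing".
    if state == "terminal":
        return state, 0.0, True
    p_improve = {("stable", "line1"): 0.8, ("stable", "line2"): 0.4,
                 ("progressing", "line1"): 0.3,
                 ("progressing", "line2"): 0.6}[(state, action)]
    if rng.random() < p_improve:
        return "stable", 1.0, False   # reward: one period of survival
    nxt = "progressing" if state == "stable" else "terminal"
    return nxt, 0.0, nxt == "terminal"

def q_learn(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "stable"
        for _ in range(20):  # cap trajectory length
            if rng.random() < eps:
                a = rng.choice(ACTIONS)           # explore
            else:
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])  # exploit
            s2, r, done = step(s, a, rng)
            best_next = max(q[(s2, a_)] for a_ in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = q_learn()
```

After training, the greedy policy recovers the regimen built into the toy dynamics: prefer line1 while stable, switch to line2 on progression.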


Learn like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification

Sep 29, 2020
Jerry Wei, Arief Suriawinata, Bing Ren, Xiaoying Liu, Mikhail Lisovsky, Louis Vaickus, Charles Brown, Michael Baker, Mustafa Nasir-Moin, Naofumi Tomita, Lorenzo Torresani, Jason Wei, Saeed Hassanpour

Applying curriculum learning requires both a range of difficulty in data and a method for determining the difficulty of examples. In many tasks, however, satisfying these requirements can be a formidable challenge. In this paper, we contend that histopathology image classification is a compelling use case for curriculum learning. Based on the nature of histopathology images, a range of difficulty inherently exists among examples, and, since medical datasets are often labeled by multiple annotators, annotator agreement can be used as a natural proxy for the difficulty of a given example. Hence, we propose a simple curriculum learning method that trains on progressively harder images as determined by annotator agreement. We evaluate our hypothesis on the challenging and clinically important task of colorectal polyp classification. Whereas vanilla training achieves an AUC of 83.7% for this task, a model trained with our proposed curriculum learning approach achieves an AUC of 88.2%, an improvement of 4.5%. Our work aims to inspire researchers to think more creatively and rigorously when choosing contexts for applying curriculum learning.
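The core mechanism, ordering examples from high to low annotator agreement, can be sketched in a few lines. The agreement measure and data structures here are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

# Sketch of agreement-based curriculum ordering: high-agreement (easy)
# examples come first. Data and the agreement measure are illustrative.

def agreement(votes):
    """Fraction of annotators agreeing with the majority label."""
    majority_count = Counter(votes).most_common(1)[0][1]
    return majority_count / len(votes)

def curriculum_order(dataset):
    """Sort examples from easiest (highest agreement) to hardest."""
    return sorted(dataset, key=lambda ex: agreement(ex["votes"]),
                  reverse=True)

# Toy dataset: each example carries the labels of four annotators.
data = [
    {"id": "img_1", "votes": ["polyp", "polyp", "normal", "polyp"]},   # 0.75
    {"id": "img_2", "votes": ["polyp", "polyp", "polyp", "polyp"]},    # 1.00
    {"id": "img_3", "votes": ["polyp", "normal", "normal", "polyp"]},  # 0.50
]

ordered = curriculum_order(data)
```

A staged schedule would then train on the first k easiest examples, growing k as training proceeds so the hardest, most-disputed images arrive last.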


Difficulty Translation in Histopathology Images

Apr 27, 2020
Jerry Wei, Arief Suriawinata, Xiaoying Liu, Bing Ren, Mustafa Nasir-Moin, Naofumi Tomita, Jason Wei, Saeed Hassanpour

The unique nature of histopathology images opens the door to domain-specific formulations of image translation models. We propose a difficulty translation model that modifies colorectal histopathology images to become more challenging to classify. Our model comprises a scorer, which provides an output confidence to measure the difficulty of images, and an image translator, which learns to translate images from easy-to-classify to hard-to-classify using a training set defined by the scorer. We present three findings. First, generated images were indeed harder to classify for both human pathologists and machine learning classifiers than their corresponding source images. Second, image classifiers trained with generated images as augmented data performed better on both easy and hard images from an independent test set. Finally, human annotator agreement and our model's measure of difficulty correlated strongly, implying that for future work requiring human annotator agreement, the confidence score of a machine learning classifier could be used instead as a proxy.
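The scorer's role, using output confidence to partition images into an easy set and a hard set that define the translator's training data, can be sketched as follows. The confidence function and threshold below are stand-ins, not the paper's model:

```python
# Sketch of the scorer-defined split from the abstract: a classifier's
# confidence measures difficulty, and the induced easy/hard partition
# defines the image translator's training set. Values are illustrative.

def split_by_difficulty(images, score_fn, threshold=0.9):
    """Partition images into easy (confident) and hard (uncertain) sets."""
    easy, hard = [], []
    for img in images:
        (easy if score_fn(img) >= threshold else hard).append(img)
    return easy, hard

# Stand-in scorer: pretend each image carries a stored confidence.
images = [("img_a", 0.98), ("img_b", 0.62), ("img_c", 0.91), ("img_d", 0.55)]
easy, hard = split_by_difficulty(images, score_fn=lambda im: im[1])
```

An image-to-image translation model would then be trained to map the easy set toward the hard set's appearance, producing the harder-to-classify images the paper evaluates.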

* Submitted to 2020 Artificial Intelligence in Medicine (AIME) conference 