Purpose: To evaluate the sensitivity of the normalized blood flow index (NBFI) for detecting early diabetic retinopathy (DR).

Methods: Optical coherence tomography angiography (OCTA) images of 30 eyes from 20 healthy controls, 21 eyes of diabetic patients with no DR (NoDR), and 26 eyes from 22 patients with mild non-proliferative DR (NPDR) were analyzed in this study. The OCTA images were centered on the fovea and covered a 6 mm × 6 mm area. En face projections of the superficial vascular plexus (SVP) and the deep capillary plexus (DCP) were obtained for quantitative OCTA feature analysis. Three quantitative OCTA features were examined: blood vessel density (BVD), blood flow flux (BFF), and normalized blood flow index (NBFI). Each feature was calculated from both the SVP and the DCP, and its sensitivity for distinguishing the three cohorts was evaluated.

Results: NBFI in the DCP was the only quantitative feature capable of distinguishing all three cohorts. Comparative analysis revealed that both BVD and BFF were able to distinguish mild NPDR from the controls and from NoDR eyes; however, neither BVD nor BFF was sensitive enough to separate NoDR from the healthy controls.

Conclusion: NBFI was demonstrated to be a sensitive biomarker of early DR, revealing retinal blood flow abnormalities better than the traditional BVD and BFF. NBFI in the DCP was the most sensitive biomarker, supporting the view that diabetes affects the DCP earlier than the SVP in DR.
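To make the three features concrete, the following is a minimal sketch of how BVD, BFF, and an NBFI-style metric could be computed from an en face OCTA projection. The global mean threshold for vessel binarization and the normalization of NBFI by the peak flow signal are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def octa_features(enface, threshold=None):
    """Compute BVD, BFF, and an NBFI-style metric from an en face OCTA image.

    Illustrative formulations only: the thresholding scheme and the
    NBFI normalization are assumptions, not the study's definitions.
    """
    img = enface.astype(float)
    if threshold is None:
        threshold = img.mean()             # assumed global binarization threshold
    vessel_mask = img > threshold          # binarized vessel map
    bvd = vessel_mask.mean()               # vessel pixels / total pixels
    # mean flow signal within the vessel mask (flux)
    bff = img[vessel_mask].mean() if vessel_mask.any() else 0.0
    # flux normalized to the peak flow signal (assumed normalization)
    nbfi = bff / img.max() if img.max() > 0 else 0.0
    return bvd, bff, nbfi

# toy en face projection standing in for a real 6 mm x 6 mm OCTA scan
rng = np.random.default_rng(0)
enface = rng.random((64, 64))
bvd, bff, nbfi = octa_features(enface)
```

In practice the binarization step would use a validated scheme (e.g. local adaptive thresholding) and the features would be computed separately for the SVP and DCP projections, as in the study.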
Self-supervised contrastive learning (CL)-based pretraining allows the development of robust, generalizable deep learning (DL) models from small labeled datasets, reducing the burden of label generation. This paper evaluates the effect of CL-based pretraining on the performance of referable vs. non-referable diabetic retinopathy (DR) classification. We developed a CL-based framework with neural style transfer (NST) augmentation to produce models with better representations and initializations for detecting DR in color fundus images. We compare our CL-pretrained model's performance with that of two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test robustness when training with small labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical data from the University of Illinois, Chicago (UIC). Compared to the baseline models, our CL-pretrained FundusNet model achieved higher AUC (CI) values on the UIC data: 0.91 (0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853). With 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
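The abstract does not state the exact contrastive objective, so as a sketch, the following implements the widely used SimCLR-style NT-Xent loss over a batch of paired embeddings; treating an image and its neural-style-transferred version as the two augmented views is an assumption about the framework's setup.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images
    (e.g. the original fundus image and its NST-augmented version --
    an assumed pairing, not necessarily the paper's exact recipe).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    # the positive for index i is its paired view at index (i + N) mod 2N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# identical views (perfect agreement) should score a lower loss than random pairs
rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
w = rng.normal(size=(8, 16))
loss_pos = nt_xent_loss(z, z)
loss_rand = nt_xent_loss(z, w)
```

Minimizing this loss pulls the two views of each image together in embedding space while pushing apart all other images in the batch, which is what yields the transferable representations the paper exploits before fine-tuning on labeled DR data.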