U-Net-based convolutional neural networks with deep feature representations and skip connections have significantly boosted the performance of medical image segmentation. In this paper, we study the more challenging problem of improving efficiency in modeling global contexts without losing localization ability for low-level details. We propose TransFuse, a novel two-branch architecture that combines Transformers and CNNs in a parallel style. With TransFuse, both global dependencies and low-level spatial details can be captured efficiently in a much shallower manner. In addition, a novel fusion technique, the BiFusion module, is proposed to fuse the multi-level features from the two branches. TransFuse achieves new state-of-the-art results on the polyp segmentation task, with 20\% fewer parameters and the fastest inference speed, at about 98.7 FPS.
Deep learning has thrived on the emergence of biomedical big data. However, medical datasets acquired at different institutions carry inherent bias caused by various confounding factors such as operation policies, machine protocols, and treatment preferences. As a result, models trained on one dataset, regardless of its volume, cannot be confidently applied to the others. In this study, we investigated model robustness to dataset bias using three large-scale Chest X-ray datasets: first, we assessed the dataset bias using a vanilla training baseline; second, we proposed a novel multi-source domain generalization model by (a) designing a new bias-regularized loss function and (b) synthesizing new data for domain augmentation. We showed that our model significantly outperformed the baseline and other approaches on data from unseen domains in terms of accuracy and various bias measures, without retraining or fine-tuning. Our method is generally applicable to other biomedical data, providing new algorithms for training models robust to bias in big data analysis and applications. Demo training code is publicly available.
Deep learning has gained tremendous attention in computer-aided diagnosis applications, particularly biomedical image analysis. However, medical datasets are subject to the dataset bias problem, where data of the same modality and body part show different distributions across institutions. Such bias may arise from various confounding factors, including operation policies, machine protocols, and treatment preferences. Consequently, machine learning models trained on one hospital site cannot confidently generalize to the others. In this study, we analyzed three large-scale public Chest X-ray datasets and found that vanilla training of deep models for diagnosing common thorax diseases suffers from exactly this dataset bias problem. To mitigate the bias effect, we framed the problem as a multi-source domain generalization task and made two contributions: 1. we improved the classical bias-regularized learning method by designing a new loss function; 2. we proposed a new domain-guided data augmentation method called MCT (Multi-layer Cross-gradient Training) for synthesizing data from unseen domains. Our model can be deployed directly to new-domain data without retraining, while suffering much less performance degradation than other baselines such as training on all sources together. Empirical studies verified the effectiveness of our methods both quantitatively and qualitatively. Our demo training code is publicly available.
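The core idea behind cross-gradient data augmentation can be sketched for the single-layer case: perturb an input along the gradient of a domain classifier's loss, so the perturbed sample imitates data from another domain. The sketch below assumes a hypothetical logistic-regression domain classifier; the function name, weights, and step size `eps` are all illustrative, and the abstract's MCT framework extends this idea across multiple layers in a way not specified here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_gradient_augment(x, w, b, domain_label, eps=0.5):
    """Single-layer sketch of cross-gradient augmentation: step the
    input x along the gradient of a (hypothetical) logistic domain
    classifier's cross-entropy loss, producing a sample that looks
    less like its source domain."""
    # Forward pass of the domain classifier: p = sigmoid(w . x + b)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    # Analytic gradient of the cross-entropy loss w.r.t. the input,
    # which for logistic regression is (p - y) * w
    grad = [(p - domain_label) * wi for wi in w]
    # Move in the direction that increases the domain loss
    return [xi + eps * gi for xi, gi in zip(x, grad)]

# Example: augment a 2-D feature vector away from domain 1
x_aug = cross_gradient_augment([1.0, -2.0], [0.3, 0.7], 0.1, 1.0)
```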
Deep learning has gained tremendous attention in computer-aided diagnosis (CAD) applications, particularly biomedical image analysis. We analyze three large-scale publicly available Chest X-ray (CXR) datasets and find that vanilla training of deep models for diagnosing common thorax diseases is subject to dataset bias, leading to severe performance degradation when evaluated on an unseen test set. In this work, we frame the problem as a multi-source domain generalization task and make two contributions to handle dataset bias: 1. we improve the classical max-margin loss function by making it more general and smooth; 2. we propose a new training framework named MCT (Multi-layer Cross-gradient Training) for unseen-domain data augmentation. Empirical studies show that our methods significantly improve model generalization and robustness to dataset bias.
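One standard way to make the max-margin (hinge) loss "more general and smooth" is to replace the hard `max(0, ·)` with a temperature-controlled softplus, which recovers the hinge as the temperature grows. The abstract does not give the paper's exact loss, so the sketch below is only an illustration of this family; the `margin` and `beta` parameters are assumptions.

```python
import math

def smooth_hinge(score, label, margin=1.0, beta=5.0):
    """Softplus relaxation of the max-margin loss:
        (1/beta) * log(1 + exp(beta * (margin - label * score)))
    As beta -> inf this tends to max(0, margin - label * score),
    while finite beta gives a smooth, everywhere-differentiable loss.
    (Illustrative sketch; the paper's exact formulation is not
    specified in the abstract.)"""
    z = beta * (margin - label * score)
    if z > 30:
        # For large z, log(1 + exp(z)) ~ z; avoids overflow in exp
        return z / beta
    return math.log1p(math.exp(z)) / beta
```

A confidently correct prediction (large `label * score`) yields a loss near zero, while a confidently wrong one approaches the linear hinge penalty; `beta` controls how sharply the smooth corner approximates `max(0, ·)`.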