Junyu Chen

Music Source Separation Based on a Lightweight Deep Learning Framework (DTTNET: DUAL-PATH TFC-TDF UNET)

Sep 15, 2023
Junyu Chen, Susmitha Vekkot, Pancham Shukla

Music source separation (MSS) aims to extract 'vocals', 'drums', 'bass' and 'other' tracks from a piece of mixed music. While deep learning methods have shown impressive results, there is a trend toward larger models. In our paper, we introduce a novel and lightweight architecture called DTTNet, which is based on a Dual-Path Module and the Time-Frequency Convolutions Time-Distributed Fully-connected UNet (TFC-TDF UNet). DTTNet achieves 10.12 dB cSDR on 'vocals' compared to the 10.01 dB reported for Band-Split RNN (BSRNN), but with 86.7% fewer parameters. We also assess pattern-specific performance and model generalization for intricate audio patterns.
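
As a point of reference for the metric, SDR is the ratio, in dB, of reference-signal energy to residual energy, and cSDR is commonly computed as a median of SDR over short chunks. A minimal NumPy sketch under that assumption (the exact chunking protocol of the paper may differ):

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-8) -> float:
    """Signal-to-distortion ratio in dB: 10 * log10(||s||^2 / ||s - s_hat||^2)."""
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2)
    return 10.0 * np.log10((num + eps) / (den + eps))

def chunk_sdr(reference: np.ndarray, estimate: np.ndarray,
              sr: int = 44100, chunk_sec: float = 1.0) -> float:
    """Median SDR over fixed-length chunks (one common definition of cSDR)."""
    n = int(sr * chunk_sec)
    scores = [sdr(reference[i:i + n], estimate[i:i + n])
              for i in range(0, len(reference) - n + 1, n)]
    return float(np.median(scores))
```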

* Submitted to ICASSP 2024 

MomentaMorph: Unsupervised Spatial-Temporal Registration with Momenta, Shooting, and Correction

Aug 05, 2023
Zhangxing Bian, Shuwen Wei, Yihao Liu, Junyu Chen, Jiachen Zhuo, Fangxu Xing, Jonghye Woo, Aaron Carass, Jerry L. Prince

Tagged magnetic resonance imaging (tMRI) has been employed for decades to measure the motion of tissue undergoing deformation. However, registration-based motion estimation from tMRI is difficult due to the periodic patterns in these images, particularly when the motion is large: the registration can become trapped in local optima, leading to motion estimation errors. We introduce a novel "momenta, shooting, and correction" framework for Lagrangian motion estimation in the presence of repetitive patterns and large motion. This framework, grounded in Lie algebra and Lie group principles, accumulates momenta in the tangent vector space and employs exponential mapping in the diffeomorphic space to approach the true optimum rapidly while circumventing local optima. A subsequent correction step ensures convergence to the true optimum. Results on a 2D synthetic dataset and a real 3D tMRI dataset demonstrate our method's efficiency in estimating accurate, dense, and diffeomorphic 2D/3D motion fields amidst large motion and repetitive patterns.
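
The exponential-map step referenced above is typically approximated by scaling and squaring: the tangent-space (velocity) field is divided by 2^N to give a small displacement, which is then composed with itself N times. A minimal 2D sketch with SciPy, under a stationary-velocity assumption and not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u, v):
    """Compose displacement fields: (u o v)(x) = u(x + v(x)) + v(x).
    u, v: arrays of shape (2, H, W)."""
    H, W = u.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)   # identity coordinates
    warped = np.stack([
        map_coordinates(u[c], grid + v, order=1, mode='nearest')
        for c in range(2)
    ])
    return warped + v

def exp_map(velocity, n_steps=6):
    """Scaling and squaring: exp(v) ~ (id + v / 2^N) composed N times."""
    disp = velocity / (2 ** n_steps)
    for _ in range(n_steps):
        disp = compose(disp, disp)            # each squaring doubles the flow
    return disp
```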

* Accepted by MICCAI Workshop 2023: Time-Series Data Analytics and Learning (MTSAIL) 

A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

Jul 28, 2023
Junyu Chen, Yihao Liu, Shuwen Wei, Zhangxing Bian, Shalini Subramanian, Aaron Carass, Jerry L. Prince, Yong Du

Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
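
To ground the core concepts mentioned here: most unsupervised deep registration methods are trained with a loss of the form L = Sim(f, m ∘ φ) + λ · Reg(φ), a similarity term between the fixed image and the warped moving image plus a deformation regularizer. A minimal PyTorch sketch using MSE similarity and a diffusion (gradient) regularizer; the choice of terms and the (x, y) channel ordering of the flow are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def registration_loss(fixed, moving, flow, lam=0.01):
    """fixed, moving: (B, 1, H, W); flow: (B, 2, H, W) displacement in
    normalized [-1, 1] coordinates with channels (x, y), as grid_sample expects."""
    B, _, H, W = fixed.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2).to(fixed)
    grid = identity + flow.permute(0, 2, 3, 1)          # (B, H, W, 2)
    warped = F.grid_sample(moving, grid, align_corners=True)
    sim = F.mse_loss(warped, fixed)                     # similarity term
    # Diffusion regularizer: penalize spatial gradients of the flow.
    reg = ((flow[..., 1:, :] - flow[..., :-1, :]) ** 2).mean() + \
          ((flow[..., :, 1:] - flow[..., :, :-1]) ** 2).mean()
    return sim + lam * reg
```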

Learning to Evaluate the Artness of AI-generated Images

May 08, 2023
Junyu Chen, Jie An, Hanjia Lyu, Jiebo Luo

Assessing the artness of AI-generated images continues to be a challenge within the realm of image generation. Most existing metrics cannot be used to perform instance-level and reference-free artness evaluation. This paper presents ArtScore, a metric designed to evaluate the degree to which an image resembles authentic artworks by artists (or conversely photographs), thereby offering a novel approach to artness assessment. We first blend pre-trained models for photo and artwork generation, resulting in a series of mixed models. Subsequently, we utilize these mixed models to generate images exhibiting varying degrees of artness with pseudo-annotations. Each photorealistic image has a corresponding artistic counterpart and a series of interpolated images that range from realistic to artistic. This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images. Extensive experiments reveal that the artness levels predicted by ArtScore align more closely with human artistic evaluation than existing evaluation metrics, such as Gram loss and ArtFID.
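
The model-blending step can be pictured as interpolating the weights of a photo-trained and an art-trained generator; a hedged sketch in PyTorch (the linear interpolation and the helper name are illustrative, not the paper's exact procedure):

```python
import copy
import torch

def blend_state_dicts(photo_model, art_model, alpha):
    """Linearly interpolate matching parameters of two generators.
    alpha = 0 -> photo model, alpha = 1 -> art model.
    Assumes both models share an architecture and float parameters."""
    photo_sd, art_sd = photo_model.state_dict(), art_model.state_dict()
    return {k: (1 - alpha) * photo_sd[k] + alpha * art_sd[k] for k in photo_sd}

# Sweeping alpha yields a series of mixed models whose outputs move from
# photorealistic to artistic, providing pseudo-annotations of artness:
# mixed = copy.deepcopy(photo_model)
# mixed.load_state_dict(blend_state_dicts(photo_model, art_model, 0.25))
```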

Predicting Adverse Neonatal Outcomes for Preterm Neonates with Multi-Task Learning

Mar 28, 2023
Jingyang Lin, Junyu Chen, Hanjia Lyu, Igor Khodak, Divya Chhabra, Colby L Day Richardson, Irina Prelipcean, Andrew M Dylag, Jiebo Luo

Diagnosis of adverse neonatal outcomes is crucial for preterm survival since it enables doctors to provide timely treatment. Machine learning (ML) algorithms have been demonstrated to be effective in predicting adverse neonatal outcomes. However, most previous ML-based methods have only focused on predicting a single outcome, ignoring the potential correlations between different outcomes, and potentially leading to suboptimal results and overfitting issues. In this work, we first analyze the correlations between three adverse neonatal outcomes and then formulate the diagnosis of multiple neonatal outcomes as a multi-task learning (MTL) problem. We then propose an MTL framework to jointly predict multiple adverse neonatal outcomes. In particular, the MTL framework contains shared hidden layers and multiple task-specific branches. Extensive experiments have been conducted using Electronic Health Records (EHRs) from 121 preterm neonates. Empirical results demonstrate the effectiveness of the MTL framework. Furthermore, the feature importance is analyzed for each neonatal outcome, providing insights into model interpretability.
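
The shared-trunk design described above is the classic hard-parameter-sharing form of MTL; a minimal PyTorch sketch, with the layer sizes and three-task setup as illustrative assumptions:

```python
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared hidden layers followed by one branch per neonatal outcome."""
    def __init__(self, n_features, n_tasks=3, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small task-specific head per adverse outcome.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]  # one logit per task
```

Training would then minimize a (possibly weighted) sum of per-task losses, e.g., one binary cross-entropy term per outcome, so the shared layers learn representations that benefit all tasks.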

An investigation of licensing of datasets for machine learning based on the GQM model

Mar 24, 2023
Junyu Chen, Norihiro Yoshida, Hiroaki Takada

Dataset licensing is currently an issue in the development of machine learning systems, which most commonly rely on publicly available datasets. However, since the images in publicly available datasets are mainly obtained from the Internet, some of them are not available for commercial use. Furthermore, developers of machine learning systems often do not consider a dataset's license when training models on it. In short, the licensing of datasets for machine learning systems is at this stage incomplete in every respect. Our investigation of two dataset collections revealed that most current datasets lack licenses, which makes it impossible to determine whether they are commercially usable. We therefore take a more scientific and systematic approach, based on the Goal-Question-Metric (GQM) model, to investigating the licensing of datasets and of the machine learning systems that use them, so that future developers can build such systems more easily and compliantly.

Deformable Cross-Attention Transformer for Medical Image Registration

Mar 10, 2023
Junyu Chen, Yihao Liu, Yufan He, Yong Du

Transformers have recently shown promise for medical image applications, leading to increasing interest in developing such models for medical image registration. Recent advancements in designing registration Transformers have focused on using cross-attention (CA) to enable a more precise understanding of spatial correspondences between moving and fixed images. Here, we propose a novel CA mechanism that computes windowed attention using deformable windows. In contrast to existing CA mechanisms, which are computationally expensive because they compute CA either globally or locally over a fixed, expanded search window, the proposed deformable CA can selectively sample a diverse set of features over a large search window while maintaining low computational complexity. The proposed model was extensively evaluated on multi-modal, mono-modal, and atlas-to-patient registration tasks, demonstrating promising performance against state-of-the-art methods and indicating its effectiveness for medical image registration. The source code for this work will be available after publication.
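
A rough reconstruction of the idea, with queries from the moving image attending to fixed-image features sampled at learned offsets around each query location; this is an illustrative sketch, not the authors' released code, and the offset range and sample count are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCrossAttention(nn.Module):
    """Each moving-image query attends to S fixed-image features sampled
    at learned offsets around its own location (illustrative sketch)."""
    def __init__(self, dim, n_heads=4, n_samples=9):
        super().__init__()
        self.h, self.d, self.s = n_heads, dim // n_heads, n_samples
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.offset = nn.Linear(dim, 2 * n_samples)  # (dx, dy) per sample
        self.proj = nn.Linear(dim, dim)

    def forward(self, mov, fix):
        # mov, fix: (B, C, H, W) feature maps of moving / fixed images.
        B, C, H, W = mov.shape
        tokens = mov.flatten(2).transpose(1, 2)               # (B, HW, C)
        q = self.q(tokens).view(B, H * W, self.h, self.d)
        # Predict small sampling offsets in normalized [-1, 1] coordinates.
        off = 0.1 * torch.tanh(self.offset(tokens)).view(B, H, W, self.s, 2)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack([xs, ys], -1).to(mov)              # (H, W, 2)
        grid = (base[None, :, :, None] + off).view(B, H, W * self.s, 2)
        sampled = F.grid_sample(fix, grid, align_corners=True)  # (B,C,H,W*S)
        sampled = sampled.view(B, C, H * W, self.s).permute(0, 2, 3, 1)
        k, v = self.kv(sampled).chunk(2, -1)                  # (B, HW, S, C)
        k = k.view(B, H * W, self.s, self.h, self.d)
        v = v.view(B, H * W, self.s, self.h, self.d)
        attn = torch.einsum("bnhd,bnshd->bnsh", q, k) / self.d ** 0.5
        attn = attn.softmax(dim=2)                            # over samples
        out = torch.einsum("bnsh,bnshd->bnhd", attn, v).reshape(B, H * W, C)
        return self.proj(out).transpose(1, 2).reshape(B, C, H, W)
```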

Spatially-varying Regularization with Conditional Transformer for Unsupervised Image Registration

Mar 10, 2023
Junyu Chen, Yihao Liu, Yufan He, Yong Du

In the past, optimization-based registration models have used spatially-varying regularization to account for deformation variations in different image regions. However, deep learning-based registration models have mostly relied on spatially-invariant regularization. Here, we introduce an end-to-end framework that uses neural networks to learn a spatially-varying deformation regularizer directly from data. The hyperparameter of the proposed regularizer is conditioned into the network, enabling easy tuning of the regularization strength. The proposed method is built upon a Transformer-based model, but it can be readily adapted to any network architecture. We thoroughly evaluated the proposed approach using publicly available datasets and observed a significant performance improvement while maintaining smooth deformation. The source code of this work will be made available after publication.
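
Concretely, spatially-varying regularization replaces the scalar smoothness weight λ with a per-pixel weight map, so different regions can be penalized differently; a minimal sketch (the diffusion-style penalty and the weight-map interface are illustrative assumptions):

```python
import torch

def spatially_varying_smoothness(flow, weight_map):
    """flow: (B, 2, H, W) displacement field; weight_map: (B, 1, H, W)
    per-pixel regularization strength, e.g. output of a conditioned network.
    A spatially-invariant regularizer is the special case weight_map = const."""
    dx = (flow[..., :, 1:] - flow[..., :, :-1]) ** 2   # gradients along x
    dy = (flow[..., 1:, :] - flow[..., :-1, :]) ** 2   # gradients along y
    loss_x = (weight_map[..., :, 1:] * dx).mean()      # broadcast over channels
    loss_y = (weight_map[..., 1:, :] * dy).mean()
    return loss_x + loss_y
```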

SwinCross: Cross-modal Swin Transformer for Head-and-Neck Tumor Segmentation in PET/CT Images

Feb 08, 2023
Gary Y. Li, Junyu Chen, Se-In Jang, Kuang Gong, Quanzheng Li

Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning, but a time-consuming process. In recent years, deep convolutional neural networks (DCNNs) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost of enlarging the field of view in DCNNs, their ability to model long-range dependencies is still limited, which can result in sub-optimal segmentation of objects whose background context spans long distances. Transformer models, on the other hand, have demonstrated excellent capability in capturing such long-range information in several semantic segmentation tasks on medical images. Inspired by the recent success of Vision Transformers and advances in multi-modal image analysis, we propose a novel segmentation model, the Cross-Modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module that incorporates cross-modal feature extraction at multiple resolutions. To validate the effectiveness of the proposed method, we performed experiments on the HECKTOR 2021 challenge dataset and compared it with nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art Transformer-based methods such as UNETR and Swin UNETR. The proposed method is experimentally shown to outperform these competing methods, thanks to the ability of the CMA module to capture complementary inter-modality feature representations between PET and CT for the task of head-and-neck tumor segmentation.
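
The core of a CMA block, tokens from one modality querying the other, can be sketched with standard multi-head attention; this is a simplified stand-in for the actual SwinCross module, which operates within shifted windows at multiple resolutions:

```python
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """PET tokens attend to CT tokens (apply symmetrically for the reverse)."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet_tokens, ct_tokens):
        # pet_tokens, ct_tokens: (B, N, dim) patch embeddings per modality.
        fused, _ = self.attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
        return self.norm(pet_tokens + fused)  # residual connection + norm
```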

* 9 pages, 3 figures 

Investigation of Network Architecture for Multimodal Head-and-Neck Tumor Segmentation

Dec 21, 2022
Ye Li, Junyu Chen, Se-in Jang, Kuang Gong, Quanzheng Li

Inspired by the recent success of Transformers in Natural Language Processing and of the Vision Transformer in Computer Vision, many researchers in the medical imaging community have flocked to Transformer-based networks for mainstream medical tasks such as classification, segmentation, and estimation. In this study, we analyze two recently published Transformer-based network architectures for the task of multimodal head-and-neck tumor segmentation and compare their performance to the de facto standard 3D segmentation network, the nnU-Net. Our results show that modeling long-range dependencies may be helpful when large structures are present and/or a large field of view is needed. However, for small structures such as head-and-neck tumors, the convolution-based U-Net architecture seemed to perform well, especially when the training dataset is small and computational resources are limited.

* Accepted for oral presentation by IEEE Medical Imaging Conference 2022 