Abstract: This study investigates the effect of beam geometry and input-data dimensionality on sparse-sampling streak artifact correction with U-Nets for clinical CT scans, as a means of incorporating volumetric context into artifact reduction and improving model performance. A total of 22 subjects were retrospectively selected (01.2016-12.2018) from the Technical University of Munich's research hospital, TUM Klinikum rechts der Isar. Sparsely sampled CT volumes were simulated with the ASTRA toolbox for parallel-, fan-, and cone-beam geometries; scans with 2048 views served as the full-view reference. 2D and 3D U-Nets were trained and validated on 14 subjects and tested on the remaining 8. For the dimensionality study, in addition to the 512x512 2D axial CT images, the scans were further pre-processed to generate so-called '2.5D' and 3D data: each CT volume was divided into 64x64x64-voxel blocks, and these individual blocks constitute the 3D data. An axial, coronal, and sagittal cut through the center of each block yielded three 64x64 2D patches that were stacked into a single 64x64x3 image, proposed as the 2.5D data. Model performance was assessed with the mean squared error (MSE) and structural similarity index measure (SSIM). For all geometries, the 2D U-Net trained on axial 2D slices achieved the best MSE and SSIM values, outperforming the 2.5D and 3D input representations.
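The 2.5D construction described above can be illustrated with a short sketch. This is not the study's pipeline; the (z, y, x) axis order, the block size of 64, and the function and variable names are assumptions made for illustration.

```python
import numpy as np

def extract_25d_patches(volume, block=64):
    """Tile `volume` (assumed z, y, x order) into block^3 cubes and return
    one 64x64x3 patch per cube, built from its three orthogonal central cuts."""
    patches = []
    nz, ny, nx = (s // block for s in volume.shape)
    for iz in range(nz):
        for iy in range(ny):
            for ix in range(nx):
                cube = volume[iz*block:(iz+1)*block,
                              iy*block:(iy+1)*block,
                              ix*block:(ix+1)*block]
                c = block // 2
                axial    = cube[c, :, :]   # central cut perpendicular to z
                coronal  = cube[:, c, :]   # central cut perpendicular to y
                sagittal = cube[:, :, c]   # central cut perpendicular to x
                patches.append(np.stack([axial, coronal, sagittal], axis=-1))
    return np.asarray(patches)  # shape: (num_blocks, block, block, 3)

# usage: a dummy 512x512x512 volume yields 8*8*8 = 512 patches of shape 64x64x3
patches = extract_25d_patches(np.zeros((512, 512, 512), dtype=np.float32))
```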
Abstract: Tensor networks have recently found applications in machine learning for both supervised and unsupervised learning. The most common approaches for training these models are gradient descent methods. In this work, we consider an alternative training scheme utilizing basic tensor network operations, e.g., summation and compression. The training algorithm is based on compressing the superposition state constructed from all the training data in product state representation. The algorithm is easily parallelized and iterates through the dataset only once; hence, it serves as a pre-training algorithm. We benchmark the algorithm on the MNIST dataset and show reasonable results for generating new images and for classification tasks. Furthermore, we provide an interpretation of the algorithm as a compressed quantum kernel density estimation for the probability amplitude of input data.
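A minimal sketch of the "sum and compress" idea, not the authors' implementation: each training vector is encoded as a bond-dimension-1 matrix product state (MPS), the MPSs are summed, and the running superposition is compressed by SVD truncation. The trigonometric feature map, the maximum bond dimension, and the single-sweep compression are illustrative assumptions.

```python
import numpy as np

def product_state_mps(x):
    """Encode a data vector x in [0, 1]^N as a product-state MPS;
    each entry is mapped to a 2-dim local feature vector (assumed embedding)."""
    return [np.array([np.cos(np.pi * xi / 2),
                      np.sin(np.pi * xi / 2)]).reshape(1, 2, 1) for xi in x]

def mps_add(a, b):
    """Sum of two MPSs: block-diagonal bulk tensors, concatenated boundary tensors."""
    out, n = [], len(a)
    for i, (ta, tb) in enumerate(zip(a, b)):
        dl = 1 if i == 0 else ta.shape[0] + tb.shape[0]
        dr = 1 if i == n - 1 else ta.shape[2] + tb.shape[2]
        t = np.zeros((dl, ta.shape[1], dr))
        if i == 0:
            t[:, :, :ta.shape[2]], t[:, :, ta.shape[2]:] = ta, tb
        elif i == n - 1:
            t[:ta.shape[0], :, :], t[ta.shape[0]:, :, :] = ta, tb
        else:
            t[:ta.shape[0], :, :ta.shape[2]] = ta
            t[ta.shape[0]:, :, ta.shape[2]:] = tb
        out.append(t)
    return out

def mps_compress(mps, max_bond):
    """One left-to-right SVD sweep that truncates every bond to max_bond."""
    mps = [t.copy() for t in mps]
    for i in range(len(mps) - 1):
        dl, d, dr = mps[i].shape
        u, s, vh = np.linalg.svd(mps[i].reshape(dl * d, dr), full_matrices=False)
        k = min(max_bond, len(s))
        mps[i] = u[:, :k].reshape(dl, d, k)
        # absorb the truncated singular values into the next tensor
        mps[i + 1] = np.einsum('ij,jkl->ikl', np.diag(s[:k]) @ vh[:k, :], mps[i + 1])
    return mps

# usage: superpose a small batch of random "images", keeping the bond dimension bounded
data = np.random.rand(8, 16)               # 8 samples with 16 pixels each
state = product_state_mps(data[0])
for x in data[1:]:
    state = mps_compress(mps_add(state, product_state_mps(x)), max_bond=16)
```

Because each sample is added and compressed once, the loop makes a single pass over the dataset and independent batches can be summed in parallel before a final compression, which is what makes the scheme attractive as a pre-training step.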