Viktor Vegh

Robust, fast and accurate mapping of diffusional mean kurtosis

Nov 30, 2022
Megan E. Farquhar, Qianqian Yang, Viktor Vegh

Figures 1-4 for Robust, fast and accurate mapping of diffusional mean kurtosis

Diffusion-weighted magnetic resonance imaging produces data encoded with the random motion of water molecules in biological tissues. The collection and extraction of information from such data have become critical to modern imaging studies, particularly those focusing on neuroimaging. A range of mathematical models is routinely applied to infer tissue microstructure properties. Diffusional kurtosis imaging entails a model for measuring the extent of non-Gaussian diffusion in biological tissues. The method has seen wide adoption across a range of clinical applications, and promises to be an increasingly important tool for clinical diagnosis, treatment planning and monitoring. However, accurate and robust estimation of kurtosis from clinically feasible data acquisitions remains a challenge. We outline a fast and robust way of estimating mean kurtosis via the sub-diffusion mathematical framework. Our kurtosis mapping method is evaluated using simulations and the Connectome 1.0 human brain data. Results show that fitting the sub-diffusion model to multiple diffusion time data and then directly calculating the mean kurtosis greatly improves the quality of the estimation. Suggestions are provided for diffusion encoding sampling, the number of diffusion times to be acquired and the separation between them. Exquisite tissue contrast is achieved even when the diffusion-encoded data are collected in only minutes. Our findings suggest robust estimation of mean kurtosis can be realised within a clinically feasible diffusion-weighted magnetic resonance imaging data acquisition time.
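For context, the conventional single-diffusion-time kurtosis estimate that this work improves upon fits the standard DKI signal representation ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K. A minimal numpy sketch with hypothetical b-values and tissue parameters; this is the baseline representation only, not the paper's sub-diffusion estimator:

```python
import numpy as np

def fit_dki(bvals, signal):
    """Fit the standard DKI representation
        ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K
    as a quadratic in b, then recover diffusivity D and kurtosis K."""
    c2, c1, _c0 = np.polyfit(bvals, np.log(signal), 2)
    D = -c1
    K = 6.0 * c2 / D**2
    return D, K

# Simulate a noiseless signal with hypothetical tissue values
bvals = np.linspace(0.0, 2.0, 8)   # b in units of 10^3 s/mm^2
D_true, K_true = 0.8, 1.2          # um^2/ms, dimensionless kurtosis
S = np.exp(-bvals * D_true + (1.0/6.0) * bvals**2 * D_true**2 * K_true)
D_est, K_est = fit_dki(bvals, S)
```

With noiseless data the quadratic fit recovers D and K essentially exactly; in practice noise and the limited b-value range make this estimate fragile, which is the motivation for the multi-diffusion-time approach described in the abstract.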

Instant tissue field and magnetic susceptibility mapping from MR raw phase using Laplacian enabled deep neural networks

Nov 16, 2021
Yang Gao, Zhuang Xiong, Amir Fazlollahi, Peter J Nestor, Viktor Vegh, Fatima Nasrallah, Craig Winter, G. Bruce Pike, Stuart Crozier, Feng Liu, Hongfu Sun

Figures 1-4 for Instant tissue field and magnetic susceptibility mapping from MR raw phase using Laplacian enabled deep neural networks

Quantitative susceptibility mapping (QSM) is a valuable MRI post-processing technique that quantifies the magnetic susceptibility of body tissue from phase data. However, the traditional QSM reconstruction pipeline involves multiple non-trivial steps, including phase unwrapping, background field removal, and dipole inversion. These intermediate steps not only increase the reconstruction time but also amplify noise and errors. This study develops a large-stencil Laplacian-preprocessed deep neural network for near-instant quantitative field and susceptibility mapping (i.e., iQFM and iQSM) from raw MR phase data. The proposed iQFM and iQSM methods were compared with established reconstruction pipelines on simulated and in vivo datasets. In addition, experiments on patients with intracranial hemorrhage and multiple sclerosis were performed to test the generalization of the novel neural networks. The proposed iQFM and iQSM methods yielded results comparable to multi-step methods in healthy subjects while dramatically improving reconstruction accuracy on intracranial hemorrhages with large susceptibilities. The reconstruction time was also substantially shortened, from minutes using multi-step methods to only 30 milliseconds using the trained iQFM and iQSM neural networks.
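The Laplacian preprocessing mentioned above builds on a known identity (due to Schofield and Zhu) that recovers the Laplacian of the true phase directly from the wrapped phase, avoiding an explicit unwrapping step: lap(phi) = cos(phi_w)*lap(sin(phi_w)) - sin(phi_w)*lap(cos(phi_w)). A 1-D finite-difference sketch of that identity with hypothetical values; the paper's large-stencil 3-D implementation is not reproduced here:

```python
import numpy as np

def lap1d(f, h):
    # second-order central finite-difference Laplacian (interior points only)
    return (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2

def laplacian_of_wrapped_phase(phi_w, h):
    """Schofield-Zhu identity: compute the Laplacian of the *true* phase
    from the wrapped phase alone, since sin and cos of the wrapped phase
    equal sin and cos of the true phase and are smooth across wraps."""
    s, c = np.sin(phi_w), np.cos(phi_w)
    return c[1:-1] * lap1d(s, h) - s[1:-1] * lap1d(c, h)

h = 0.01
x = np.arange(-3.0, 3.0, h)
phi_true = 0.5 * x**2                     # smooth phase that wraps many times
phi_w = np.angle(np.exp(1j * phi_true))   # wrapped into (-pi, pi]
lap_est = laplacian_of_wrapped_phase(phi_w, h)
# true Laplacian of 0.5*x^2 is 1.0 everywhere
```

The identity holds because lap(sin phi) = cos(phi) phi'' - sin(phi) (phi')^2 and lap(cos phi) = -sin(phi) phi'' - cos(phi) (phi')^2, so the combination cancels the (phi')^2 terms and leaves phi''.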

Fractional order magnetic resonance fingerprinting in the human cerebral cortex

Jun 09, 2021
Viktor Vegh, Shahrzad Moinian, Qianqian Yang, David C. Reutens

Figures 1-4 for Fractional order magnetic resonance fingerprinting in the human cerebral cortex

Mathematical models are becoming increasingly important in magnetic resonance imaging (MRI), as they provide a mechanistic approach for linking tissue microstructure to signals acquired using the medical imaging instrument. The Bloch equations, which describe spin precession and relaxation in a magnetic field, are a set of integer-order differential equations whose solution exhibits mono-exponential behaviour in time. Parameters of the model may be estimated using a non-linear solver, or by creating a dictionary of model parameters from which MRI signals are simulated and then matched with experiment. We have previously shown the potential efficacy of a magnetic resonance fingerprinting (MRF) approach, i.e. dictionary matching based on the classical Bloch equations, for parcellating the human cerebral cortex. However, this classical model is unable to fully describe the mm-scale MRI signal generated by a heterogeneous and complex tissue micro-environment. The time-fractional order Bloch equations have been shown to provide a good fit to brain MRI signals as a function of time. We replaced the integer-order Bloch equations with the previously reported time-fractional counterpart within the MRF framework and performed experiments to parcellate human gray matter, which is cortical brain tissue with different cyto-architecture at different spatial locations. Our findings suggest that the time-fractional order parameters, α and β, potentially associate with the effect of interareal architectonic variability, hypothetically leading to more accurate cortical parcellation.
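Solutions of time-fractional relaxation equations are commonly written in terms of the Mittag-Leffler function, which generalises the exponential decay of the classical Bloch equations and reduces to it when the fractional order is 1. A minimal truncated-series sketch, illustrative only and not a production evaluator; the function name is our own:

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """One-parameter Mittag-Leffler function
        E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1),
    evaluated by a truncated power series (adequate for small |z|)."""
    return sum(z**k / math.gamma(alpha * k + 1.0) for k in range(n_terms))

t = 0.5
# alpha = 1 recovers classical mono-exponential relaxation: E_1(-t) = exp(-t)
m_classical = mittag_leffler(1.0, -t)
# alpha < 1 gives the non-exponential decay used in time-fractional models
m_frac = mittag_leffler(0.8, -t)
```

For alpha = 1 the series is exactly the Taylor series of exp, which is the sense in which the fractional model contains the classical one as a special case.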

CNNs and GANs in MRI-based cross-modality medical image estimation

Jun 04, 2021
Azin Shokraei Fard, David C. Reutens, Viktor Vegh

Figures 1-4 for CNNs and GANs in MRI-based cross-modality medical image estimation

Cross-modality image estimation involves generating images of one medical imaging modality from those of another modality. Convolutional neural networks (CNNs) have been shown to be useful in identifying, characterising and extracting image patterns. Generative adversarial networks (GANs) use CNNs as generators, and estimated images are discriminated as true or false by an additional network. CNNs and GANs within the image estimation framework may be considered more generally as deep learning approaches, since imaging data tend to be large, leading to a large number of network weights. Almost all research in the CNN/GAN image estimation literature has involved the use of MRI data, with the other modality primarily being PET or CT. This review provides an overview of the use of CNNs and GANs for MRI-based cross-modality medical image estimation. We outline the neural networks implemented, and detail network constructs employed for CNN and GAN image-to-image estimators. Motivations behind cross-modality image estimation are provided as well. GANs appear to provide better utility than CNNs in cross-modality image estimation, a finding drawn from our analysis involving metrics that compare estimated and actual images. Our final remarks highlight key challenges faced by the cross-modality medical image estimation field, and suggestions for future research are outlined.

Linear centralization classifier

Dec 22, 2017
Mohammad Reza Bonyadi, Viktor Vegh, David C. Reutens

Figures 1-4 for Linear centralization classifier

A classification algorithm, called the Linear Centralization Classifier (LCC), is introduced. The algorithm seeks to find a transformation that best maps instances from the feature space to a space where they concentrate towards the centers of their own classes, while maximizing the distance between class centers. We formulate the classifier as a quadratic program with quadratic constraints. We then simplify this formulation to a linear program that can be solved efficiently using a linear programming solver (e.g., dual simplex). We extend the LCC formulation to enable the use of kernel functions for non-linear classification applications. We compare our method with two standard classification methods (support vector machine and linear discriminant analysis) and four state-of-the-art classification methods applied to eight standard classification datasets. Our experimental results show that LCC classifies instances more accurately (based on the area under the receiver operating characteristic curve) than the other tested methods on the chosen datasets. We also report results for LCC with a particular kernel on synthetic non-linear classification problems.
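The paper's linear-programming formulation cannot be reproduced from the abstract alone. As a toy illustration of the underlying centralization idea, the sketch below classifies instances by their nearest class center under a fixed linear map (taken here as the identity, whereas LCC learns this map); the data and function names are our own:

```python
import numpy as np

def fit_centers(X, y):
    # class centers in the (transformed) feature space
    classes = np.unique(y)
    centers = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centers

def predict_nearest_center(X, classes, centers):
    # assign each instance to the class whose center is closest
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(4.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

classes, centers = fit_centers(X, y)
pred = predict_nearest_center(X, classes, centers)
acc = (pred == y).mean()
```

LCC's contribution is precisely in learning the transformation so that this nearest-center rule works well even when classes are not separable in the raw feature space.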
