Image restoration is a typical ill-posed problem that encompasses a variety of tasks. In medical imaging, a degraded image hinders diagnosis and even subsequent image processing. Both traditional iterative algorithms and recent deep networks have attracted much attention and achieved significant improvements in reconstructing satisfactory images. This study combines their advantages in one unified mathematical model and proposes a general image restoration strategy for such problems. The strategy consists of two modules. First, a novel generative adversarial network (GAN) with WGAN-GP training is built to recover image structures and subtle details. Then, a deep iteration module promotes image quality by combining pre-trained deep networks with compressed sensing algorithms through ADMM optimization. The (D)eep (I)teration module suppresses image artifacts and further recovers subtle details, (A)ssisted by a (M)ulti-level (O)bey-pixel feature extraction (N)etwork (D)iscriminator that recovers general structures; the proposed strategy is therefore named DIAMOND.
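The deep iteration module's pairing of a learned prior with compressed-sensing-style optimization via ADMM can be illustrated with a minimal plug-and-play sketch. Everything below is an assumption for illustration only: the pre-trained network is stood in for by a simple neighbor-averaging denoiser, the forward operator is taken as the identity, and the names `denoise`, `pnp_admm`, and the parameter `rho` are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def denoise(v, strength=0.5):
    """Stand-in for a pre-trained deep denoising prior (assumption):
    blend the image with its 4-neighbor average."""
    p = np.pad(v, 1, mode="edge")
    avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    return (1.0 - strength) * v + strength * avg

def pnp_admm(y, n_iters=20, rho=1.0):
    """Plug-and-play ADMM for min_x ||x - y||^2 + prior(z) s.t. x = z:
    x-step is the proximal update of the data-fidelity term, the z-step
    applies the learned prior (here the stand-in denoiser), and the
    u-step performs dual ascent on the splitting constraint."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # data-fidelity proximal step
        z = denoise(x + u)                     # prior / denoising step
        u = u + x - z                          # dual update
    return x
```

A constant image is a fixed point of this loop (the denoiser leaves it unchanged), while on noisy input the iterates converge toward a smoothed estimate, mirroring how the deep iteration module alternates between data fidelity and a deep prior.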
Tomographic image reconstruction with deep learning is an emerging field of applied artificial intelligence, but a recent study revealed that deep reconstruction networks, such as the well-known AUTOMAP, are unstable for computed tomography (CT) and magnetic resonance imaging (MRI). Specifically, three kinds of instabilities were identified: (1) strong output artifacts from tiny perturbations, (2) poor detection of small features, and (3) decreased performance with more input data. These instabilities are believed to stem from a lack of kernel awareness and to be nontrivial to overcome, whereas compressed sensing (CS) reconstruction was reported to be stable owing to its kernel awareness. Since deep reconstruction may potentially become the main driving force for better image quality, stabilizing deep reconstruction networks is an urgent challenge. Here we propose an Analytic, Compressive, Iterative Deep (ACID) network to fundamentally address this challenge. Instead of relying on deep learning or compressed sensing alone, ACID consists of four modules: deep reconstruction, CS, analytic mapping, and iterative refinement. In our experiments, ACID eliminated all three kinds of instabilities and significantly improved image quality relative to the methods in the aforementioned PNAS study. ACID is only one example of integrating diverse algorithmic ingredients, but it clearly demonstrates that data-driven reconstruction can be stabilized to outperform reconstruction using CS alone. The power of ACID comes from a unique combination of a deep reconstruction network trained on big data, CS via advanced optimization, and iterative refinement that stabilizes the whole workflow. We anticipate that this integrative, closed-loop, data-driven approach will add great value to clinical and other applications.
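The closed loop of deep reconstruction, a CS-style sparsity step, analytic mapping back to the measurement domain, and iterative refinement can be sketched as a toy MRI-like pipeline. This is a hedged illustration, not the published ACID algorithm: the trained network is replaced by a mild smoother, the CS module by soft-thresholding of Fourier coefficients, and the analytic mapping by re-inserting measured k-space samples; all function names (`deep_recon`, `cs_step`, `data_consistency`, `acid_loop`) are hypothetical.

```python
import numpy as np

def deep_recon(x):
    """Stand-in for a trained reconstruction network (assumption):
    blend the image with its 4-neighbor average."""
    p = np.pad(x, 1, mode="edge")
    avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    return 0.5 * x + 0.5 * avg

def cs_step(x, lam=0.01):
    """Toy CS step: soft-threshold Fourier coefficients to promote
    sparsity (a crude surrogate for CS optimization)."""
    k = np.fft.fft2(x)
    mag = np.abs(k)
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return np.real(np.fft.ifft2(k * scale))

def data_consistency(x, y_meas, mask):
    """Analytic mapping: project onto the measured data by replacing
    sampled k-space entries with the actual measurements."""
    k = np.fft.fft2(x)
    k[mask] = y_meas[mask]
    return np.real(np.fft.ifft2(k))

def acid_loop(y_meas, mask, n_iters=10):
    """Iterative refinement closing the loop between the data-driven
    step, the sparsity step, and measurement-domain consistency."""
    x = np.real(np.fft.ifft2(np.where(mask, y_meas, 0.0)))  # zero-filled init
    for _ in range(n_iters):
        x = deep_recon(x)                      # data-driven reconstruction
        x = cs_step(x)                         # sparsity regularization
        x = data_consistency(x, y_meas, mask)  # anchor to the measurements
    return x
```

With full sampling, the data-consistency projection returns the exact image regardless of what the network step produced, which illustrates how anchoring each iterate to the measurements can stabilize a data-driven reconstruction loop.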