Jingyi Shen

PSRFlow: Probabilistic Super Resolution with Flow-Based Models for Scientific Data

Aug 08, 2023
Jingyi Shen, Han-Wei Shen

Figures 1–4

Although many deep-learning-based super-resolution approaches have been proposed in recent years, few can quantify the errors and uncertainties of their super-resolved results, because no ground truth is available at inference time. For scientific visualization applications, however, conveying the uncertainty of the results to scientists is crucial to avoid generating misleading or incorrect information. In this paper, we propose PSRFlow, a novel normalizing-flow-based generative model for scientific data super-resolution that incorporates uncertainty quantification into the super-resolution process. PSRFlow learns the conditional distribution of the high-resolution data given its low-resolution counterpart. By sampling from a Gaussian latent space that captures the missing information in the high-resolution data, one can generate different plausible super-resolution outputs. Efficient sampling in this Gaussian latent space allows our model to perform uncertainty quantification for the super-resolved results. During training, we augment the training data with samples across various scales so that the model adapts to data of different scales, achieving flexible super-resolution for a given input. Our results demonstrate superior performance and robust uncertainty quantification compared with existing methods such as interpolation and GAN-based super-resolution networks.
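The sampling-based uncertainty quantification described above can be sketched as follows. This is a minimal illustration, not PSRFlow itself: `sample_super_resolution` is a hypothetical stand-in for the trained flow's inverse pass, which in the real model is a learned invertible network conditioned on low-resolution features.

```python
import numpy as np

# Hypothetical stand-in for a trained conditional flow: maps a
# low-resolution field plus one Gaussian latent sample to one
# plausible high-resolution field.
def sample_super_resolution(lr, z, scale=4):
    hr = np.kron(lr, np.ones((scale, scale)))  # naive nearest-neighbor upsampling
    return hr + 0.1 * z                        # latent sample fills in missing detail

rng = np.random.default_rng(0)
lr = rng.random((16, 16))   # low-resolution scalar field
K = 64                      # number of latent draws

# Draw K samples z ~ N(0, I) and generate K plausible high-resolution fields.
samples = np.stack([
    sample_super_resolution(lr, rng.standard_normal((64, 64)))
    for _ in range(K)
])

prediction = samples.mean(axis=0)    # point estimate
uncertainty = samples.std(axis=0)    # per-voxel uncertainty map
```

The key idea is that because sampling the Gaussian latent space is cheap, the per-voxel spread across many plausible outputs can serve as an uncertainty estimate without any ground truth.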

* To be published in Proc. IEEE VIS 2023 

IDLat: An Importance-Driven Latent Generation Method for Scientific Data

Aug 05, 2022
Jingyi Shen, Haoyu Li, Jiayi Xu, Ayan Biswas, Han-Wei Shen

Figures 1–4

Deep-learning-based latent representations have been widely used for numerous scientific visualization applications, such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and feed them into a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy-encoding algorithm trained jointly with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method on data from multiple scientific visualization applications.
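The role of the spatial importance map can be illustrated with a simple importance-weighted reconstruction objective. This is only a sketch of the idea: the weighting function and Gaussian importance map below are hypothetical, and IDLat's actual importance-driven transformation and entropy coder are learned networks.

```python
import numpy as np

# Hypothetical importance-weighted reconstruction loss: regions with
# higher importance contribute more, so a size-constrained latent code
# is pushed to preserve them more faithfully.
def importance_weighted_mse(data, reconstruction, importance):
    w = importance / importance.sum()          # normalize to a weight field
    return float(np.sum(w * (data - reconstruction) ** 2))

rng = np.random.default_rng(1)
data = rng.random((32, 32))
recon = data + 0.05 * rng.standard_normal((32, 32))  # imperfect reconstruction

# Example importance map: emphasize the center of the domain.
yy, xx = np.mgrid[0:32, 0:32]
importance = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 64.0)

loss = importance_weighted_mse(data, recon, importance)
```

Errors inside the high-importance region dominate the loss, while errors in low-importance regions are largely ignored, which is the behavior a domain-interest-guided latent representation aims for.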

* 11 pages, 12 figures, Proc. IEEE VIS 2022 

An Information-theoretic Visual Analysis Framework for Convolutional Neural Networks

May 02, 2020
Jingyi Shen, Han-Wei Shen

Figures 1–4

Despite the great success of Convolutional Neural Networks (CNNs) in Computer Vision and Natural Language Processing, the working mechanism behind CNNs is still under extensive discussion and research. Driven by a strong demand for theoretical explanations of neural networks, some researchers utilize information theory to provide insight into the black-box model. However, to the best of our knowledge, employing information theory to quantitatively analyze and qualitatively visualize neural networks has not been extensively studied in the visualization community. In this paper, we combine information entropies and visualization techniques to shed light on how CNNs work. Specifically, we first introduce a data model to organize the data that can be extracted from CNN models. Then we propose two ways to calculate entropy under different circumstances. To provide a fundamental understanding of the basic building blocks of CNNs (e.g., convolutional layers, pooling layers, normalization layers) from an information-theoretic perspective, we develop a visual analysis system, CNNSlicer. CNNSlicer allows users to interactively explore how the amount of information changes inside the model. With case studies on widely used benchmark datasets (MNIST and CIFAR-10), we demonstrate the effectiveness of our system in opening the black box of CNNs.
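One common way to measure how information changes across a CNN's layers is to estimate the Shannon entropy of each layer's activation values. The sketch below shows a histogram-based estimate applied before and after a max-pooling step; it is a generic illustration with random stand-in feature maps, not the specific entropy definitions proposed in the paper.

```python
import numpy as np

def activation_entropy(activations, bins=32):
    """Shannon entropy (in bits) of a layer's activation values,
    estimated by histogram binning."""
    hist, _ = np.histogram(activations, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0 * log(0) := 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(2)
relu_out = np.maximum(rng.standard_normal((8, 16, 16)), 0)  # stand-in feature maps
pooled = relu_out.reshape(8, 8, 2, 8, 2).max(axis=(2, 4))   # 2x2 max pooling

# Pooling discards values, so the entropy estimate typically changes
# from one layer to the next; tracking this across layers is the kind
# of quantity a system like CNNSlicer lets users explore.
h_before = activation_entropy(relu_out)
h_after = activation_entropy(pooled)
```

With 32 bins, the estimate is bounded by log2(32) = 5 bits, which gives a simple sanity check for any layer.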
