Social influence prediction has permeated many domains, including marketing, behavior prediction, and recommendation systems. However, traditional methods of predicting social influence not only require domain expertise but also rely on extracting user features, which can be very tedious. Additionally, graph convolutional networks (GCNs), which deal with graph data in non-Euclidean space, are not directly applicable to Euclidean space. To overcome these problems, we extended DeepInf so that it can predict the social influence of COVID-19 via the transition probabilities of the PageRank domain. Furthermore, our implementation gives rise to a deep-learning-based personalized propagation algorithm, called DeepPP. The resulting algorithm combines the personalized propagation of a neural prediction model with the approximate personalized propagation of a neural prediction model from PageRank analysis. Four social networks from different domains as well as two COVID-19 datasets were used to demonstrate the efficiency and effectiveness of the proposed algorithm. Compared with other baseline methods, DeepPP provides more accurate social influence predictions. Further, experiments demonstrate that DeepPP can be applied to real-world COVID-19 prediction data.
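The approximate personalized propagation that DeepPP builds on can be illustrated with a short power-iteration sketch in the PageRank spirit, Z ← (1 − α)·Â·Z + α·H. The function name, the toy graph, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def personalized_propagation(A, H, alpha=0.1, k=10):
    """Approximate personalized propagation of neural predictions.

    A: adjacency matrix, H: per-node prediction logits from any neural model.
    Iterates Z <- (1 - alpha) * A_hat @ Z + alpha * H for k steps, where
    A_hat is the symmetrically normalized adjacency with self-loops.
    """
    A_tilde = A + np.eye(A.shape[0])                 # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt        # D^-1/2 (A+I) D^-1/2
    Z = H.copy()
    for _ in range(k):
        Z = (1.0 - alpha) * (A_hat @ Z) + alpha * H  # teleport back to H
    return Z

# Toy example: 3-node path graph, 2-class logits per node
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0.5, 0.5], [0., 1.]])
Z = personalized_propagation(A, H)
```

With α = 1 the propagation degenerates to the raw predictions H; smaller α mixes in more neighborhood structure.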
Deep learning approaches have provided state-of-the-art performance in many applications by relying on extremely large and heavily overparameterized neural networks. However, such networks have been shown to be very brittle, to generalize poorly to new use cases, and are often difficult if not impossible to deploy on resource-limited platforms. Model pruning, i.e., reducing the size of the network, is a widely adopted strategy that can lead to more robust and generalizable networks -- usually orders of magnitude smaller with the same or even improved performance. While there exist many heuristics for model pruning, our understanding of the pruning process remains limited. Empirical studies show that some heuristics improve performance while others can make models more brittle or have other side effects. This work aims to shed light on how different pruning methods alter the network's internal feature representation and the corresponding impact on model performance. To provide a meaningful comparison and characterization of model feature space, we use three geometric metrics that are decomposed from the commonly adopted classification loss. With these metrics, we design a visualization system to highlight the impact of pruning on model predictions as well as the latent feature embedding. The proposed tool provides an environment for exploring and studying differences among pruning methods and between pruned and original models. By leveraging our visualization, ML researchers can not only identify samples that are fragile to model pruning and data corruption but also obtain insights and explanations on how some pruned models achieve superior robustness.
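As a concrete illustration of what "model pruning" means here, the sketch below shows one widely used heuristic, unstructured magnitude pruning; it is a generic example of the family of methods being compared, not one of the specific heuristics studied in the work.

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity)."""
    thresh = np.quantile(np.abs(W), sparsity)   # magnitude cutoff
    return np.where(np.abs(W) < thresh, 0.0, W)

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))   # a hypothetical weight matrix
P = magnitude_prune(W, 0.9)       # ~90% of entries set to zero
```

The surviving weights are unchanged; only the mask differs between pruning heuristics, which is exactly why their effects on the feature space can diverge.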
Conventional computational ghost imaging (CGI) uses light carrying a sequence of uniform-resolution patterns to illuminate the object, then performs a correlation calculation between the light intensity values reflected by the target and the preset patterns to obtain the object image. It requires a large number of measurements to obtain high-quality images, especially if high-resolution images are to be obtained. To solve this problem, we developed temporally variable-resolution illumination patterns, replacing the conventional uniform-resolution illumination patterns with a sequence of patterns of different imaging resolutions. In addition, we propose to combine temporally variable-resolution illumination patterns with a spatially variable-resolution structure to develop temporally and spatially variable-resolution (TSV) illumination patterns, which not only improve the imaging quality of the region of interest (ROI) but also improve the robustness to noise. The methods using the proposed illumination patterns are verified by simulations and experiments in comparison with CGI. For the same number of measurements, the method using temporally variable-resolution illumination patterns has better imaging quality than CGI, but it is less robust to noise. The method using TSV illumination patterns has better imaging quality in the ROI than the method using temporally variable-resolution illumination patterns and CGI under the same number of measurements. We also experimentally verify that the method using TSV patterns has better imaging performance when applied to higher-resolution imaging. The proposed methods are expected to address the difficulty of achieving high-resolution, high-quality computational ghost imaging.
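The correlation calculation at the heart of CGI can be sketched as follows: a bucket value is recorded per pattern, and the image is recovered from the second-order correlation G = ⟨BP⟩ − ⟨B⟩⟨P⟩. The toy object, the random uniform-resolution patterns, and the measurement count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((8, 8))
obj[2:6, 2:6] = 1.0                              # hypothetical binary object
M = 4000                                         # number of measurements
patterns = rng.random((M, 8, 8))                 # uniform-resolution speckle patterns
P = patterns.reshape(M, -1)

# Bucket signal: total light intensity reflected by the object per pattern
bucket = P @ obj.ravel()

# Second-order correlation reconstruction: G = <B*P> - <B><P>
G = (bucket[:, None] * P).mean(axis=0) - bucket.mean() * P.mean(axis=0)
img = G.reshape(8, 8)
```

The bright square emerges only statistically, which is why reducing M (the point of the variable-resolution patterns) degrades uniform-resolution CGI quality.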
The multi-dithering method has been well verified for phase locking in polarization coherent combination experiments. However, it is hard to apply to low-repetition-frequency pulsed lasers, since the pulsed laser and the phase noise overlap in the frequency domain and traditional filters cannot effectively separate the phase noise. To solve this problem, we propose a novel method of pulse noise detection, identification, and filtering based on the autocorrelation characteristics of the noise signals. In the proposed algorithm, a self-designed window algorithm is used to identify the pulses, and the pulse signal group in the window is then replaced by interpolation, which effectively filters out the pulse signals doped in the phase noise within 0.1 ms. After filtering the pulses from the phase noise, the phase difference of two pulsed beams (10 kHz) is successfully compensated to zero in 1 ms, and the coherent combination with closed-loop phase locking is realized. At the same time, few phase corrections are required, the phase lock is stable, and the final light intensity increases to the ideal value (0.9 Imax).
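A toy sketch of the identify-and-interpolate idea (window-based pulse detection followed by interpolation over the flagged samples) might look like the following. The median/MAD threshold and all parameter values are assumptions, not the authors' self-designed window algorithm.

```python
import numpy as np

def remove_pulses(signal, win=25, n_sigma=5.0):
    """Flag samples that deviate strongly from their local window statistics,
    then replace them by linear interpolation from the unflagged neighbours."""
    x = signal.astype(float).copy()
    pad = win // 2
    mask = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        lo, hi = max(0, i - pad), min(x.size, i + pad + 1)
        local = np.median(x[lo:hi])                        # robust local level
        spread = np.median(np.abs(x[lo:hi] - local)) + 1e-12
        if abs(x[i] - local) > n_sigma * spread:
            mask[i] = True                                 # flagged as a pulse
    idx = np.arange(x.size)
    x[mask] = np.interp(idx[mask], idx[~mask], x[~mask])   # fill by interpolation
    return x, mask

# Usage: a slow phase-noise-like sine with two doped pulse samples
t = np.linspace(0.0, 1.0, 500)
noisy = np.sin(2 * np.pi * 3 * t)
noisy[100] += 10.0
noisy[300] += 8.0
clean, mask = remove_pulses(noisy)
```

The median/MAD pair keeps the detector robust even when the pulse itself falls inside the window, so only the doped samples are rewritten.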
We propose a new and challenging task, namely IDentity Stylization (IDS) across heterogeneous domains. IDS focuses on stylizing the content identity rather than completely swapping it with the reference identity. We use an effective heterogeneous-network-based framework, $Styleverse$, that employs a single domain-aware generator to exploit the Metaverse of diverse heterogeneous faces, based on the proposed dataset FS13 with limited data. FS13 comprises 13 kinds of Face Styles covering diverse lighting conditions, art representations, and life dimensions. Previous similar tasks, \eg, image style transfer, can handle textural style transfer based on a reference image, but usually ignore the highly structure-aware facial area and high-fidelity preservation of the content. In contrast, Styleverse controllably creates topology-aware faces in the Parallel Style Universe, where the source facial identity is adaptively styled via AdaIN, guided by the domain-aware and reference-aware style embeddings from heterogeneous pretrained models. We first establish the IDS quantitative benchmark as well as the qualitative Styleverse matrix. Extensive experiments demonstrate that Styleverse achieves higher-fidelity identity stylization than other state-of-the-art methods.
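The AdaIN operation that the framework relies on for style injection is standard and can be stated compactly: the content feature map is renormalized to the channel-wise statistics of the style embedding. The (C, H, W) feature-map shapes below are an assumption.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over (C, H, W) feature maps:
    align the content's per-channel mean/std to the style's."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sigma = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sigma = style.std(axis=(1, 2), keepdims=True) + eps
    return s_sigma * (content - c_mu) / c_sigma + s_mu

rng = np.random.default_rng(0)
content = rng.normal(size=(8, 16, 16))   # hypothetical content features
style = rng.normal(1.0, 2.0, size=(8, 16, 16))
out = adain(content, style)
```

Because only first- and second-order channel statistics are transferred, the spatial (topology-aware) structure of the content features survives, which is the property the identity-stylization task exploits.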
Subgraph recognition aims at discovering a compressed substructure of a graph that is most informative to the graph property. It can be formulated by optimizing the Graph Information Bottleneck (GIB) with a mutual information estimator. However, GIB suffers from training instability since the mutual information of graph data is intrinsically difficult to estimate. This paper introduces a noise injection method to compress the information in the subgraphs, which leads to a novel Variational Graph Information Bottleneck (VGIB) framework. VGIB allows a tractable variational approximation to its objective under mild assumptions. Therefore, VGIB enjoys a more stable and efficient training process - in practice, we find that VGIB converges 10 times faster than GIB with improved performance. Extensive experiments on graph interpretation, explainability of Graph Neural Networks, and graph classification show that VGIB finds better subgraphs than existing methods.
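One plausible reading of the noise-injection step, sketched in plain NumPy (the convex-combination form, names, and shapes are assumptions): each node's feature is blended with Gaussian noise matching the batch statistics, with a per-node keep weight, so that information outside the selected subgraph is compressed away.

```python
import numpy as np

def inject_noise(X, lam, rng):
    """Blend node features X (n, d) with Gaussian noise.

    lam (n,): per-node keep weights in [0, 1]; lam_i = 1 preserves node i,
    lam_i = 0 fully replaces it with noise drawn from the feature statistics.
    """
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    eps = rng.normal(mu, sigma, size=X.shape)            # matched-statistics noise
    return lam[:, None] * X + (1.0 - lam[:, None]) * eps

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))        # hypothetical node features
lam = rng.random(10)                # e.g. output of a subgraph selector
Z = inject_noise(X, lam, rng)
```

Because the perturbation is an explicit Gaussian, the information term admits a tractable variational bound instead of requiring a mutual information estimator.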
Ghost imaging (GI) is a novel imaging method that reconstructs object information from light intensity correlation measurements. However, at present, the field of view (FOV) is limited to the illuminated range of the light patterns. To enlarge the FOV of GI efficiently, we propose the omnidirectional ghost imaging system (OGIS), which achieves a 360{\deg} omnidirectional FOV in a single shot simply by adding a curved mirror. Moreover, by designing retina-like annular patterns in log-polar coordinates, OGIS obtains unwrapping-free, undistorted panoramic images with uniform resolution, which opens up a new way for the application of GI.
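The log-polar sampling that makes the panorama unwrapping-free can be sketched as below: an annular image is read out on a grid of log-spaced radii and uniform angles, so every ring contributes the same angular pixel footprint. The function, grid sizes, and nearest-neighbour lookup are illustrative assumptions rather than the exact OGIS pattern design.

```python
import numpy as np

def unwrap_annular(img, r_in, r_out, n_theta=360, n_r=64):
    """Sample an annular omnidirectional image on a log-polar grid,
    returning a panorama with rows = radius and columns = azimuth."""
    cy, cx = (s // 2 for s in img.shape)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Log-spaced radii: equal angular footprint at every ring
    radii = np.exp(np.linspace(np.log(r_in), np.log(r_out), n_r))
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).round().astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(tt)).round().astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

pano = unwrap_annular(np.ones((128, 128)), 10.0, 60.0)   # toy annular image
```

Baking this mapping into the annular illumination patterns themselves, rather than resampling afterwards, is what removes the unwrapping step.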
Ghost imaging (GI) reconstructs images using a single-pixel or bucket detector, and has the advantages of scattering robustness, a wide spectrum, and beyond-visual-field imaging. However, this technique needs a large number of measurements to obtain a sharp image. Many methods have been proposed to overcome this disadvantage. Retina-like patterns, as one of the compressive sensing approaches, enhance the imaging quality of the region of interest (ROI) without increasing the number of measurements. The design of the retina-like patterns determines the performance of the ROI in the reconstructed image. Unlike the conventional method of filling the ROI with random patterns, we propose to optimize retina-like patterns by filling the ROI with patterns containing the sparsity prior of the objects. The proposed method is verified by simulations and experiments in comparison with conventional GI, retina-like GI, and GI using patterns optimized by principal component analysis. The method using optimized retina-like patterns obtains the best imaging quality in the ROI among these methods. Meanwhile, the good generalization ability of the optimized retina-like patterns is also verified. When designing the size and position of the ROI of a retina-like pattern, the feature information of the target can be obtained to optimize the pattern of the ROI. The proposed method paves the way for realizing high-quality GI.
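A hypothetical construction of a retina-like illumination pattern (coarse random periphery, full-resolution ROI) is sketched below; the block size and ROI placement are assumptions, and the proposed optimization would draw the ROI cells from a learned sparsity prior rather than the uniform distribution used here.

```python
import numpy as np

def retina_pattern(size=64, roi=(16, 48), block=4, rng=None):
    """Retina-like pattern: coarse blocks outside the ROI, fine pixels inside.

    roi gives the (start, stop) index of the square ROI along both axes.
    """
    rng = rng if rng is not None else np.random.default_rng()
    coarse = rng.random((size // block, size // block))
    pat = np.kron(coarse, np.ones((block, block)))        # coarse periphery
    r0, r1 = roi
    pat[r0:r1, r0:r1] = rng.random((r1 - r0, r1 - r0))    # full-resolution ROI
    return pat

pat = retina_pattern(rng=np.random.default_rng(1))
```

The measurement budget is unchanged; resolution is simply redistributed toward the ROI, which is why the ROI quality improves at fixed measurement count.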
Single-pixel imaging, with the advantages of a wide spectrum, beyond-visual-field imaging, and robustness to light scattering, has attracted increasing attention in recent years. Fourier single-pixel imaging (FSI) can reconstruct sharp images under sub-Nyquist sampling. However, conventional FSI has difficulty balancing imaging quality and efficiency. To overcome this issue, we propose a novel approach called complementary Fourier single-pixel imaging (CFSI) to reduce the number of measurements while retaining robustness. The complementary nature of Fourier patterns based on a four-step phase-shift algorithm is combined with the complementary nature of a digital micromirror device. CFSI only requires two phase-shifted patterns to obtain one Fourier spectral value: four light intensity values are obtained by loading the two patterns, and the spectral value is calculated through differential measurement, which is robust to noise. The proposed method is verified by simulations and experiments in comparison with FSI based on two-, three-, and four-step phase-shift algorithms. CFSI performed better than the other methods whenever CFSI had not yet reached its best imaging quality. The reported technique provides an alternative approach to realizing real-time and high-quality imaging.
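The two-pattern differential measurement can be sketched as follows: each loaded fringe pattern P and its DMD complement 1 − P yield two bucket values whose difference isolates one quadrature, and two phase-shifted patterns (0 and π/2) then combine into one complex Fourier coefficient. The function name and pattern normalization are assumptions; the bucket values are simulated here as inner products with a toy object.

```python
import numpy as np

def cfsi_coeff(obj, fx, fy):
    """One Fourier spectral value of `obj` at frequency (fx, fy) from two
    loaded fringe patterns plus their DMD-complementary counterparts."""
    h, w = obj.shape
    y, x = np.mgrid[0:h, 0:w]
    theta = 2 * np.pi * (fx * x / w + fy * y / h)
    coeff = 0.0 + 0.0j
    for phi, wgt in ((0.0, 1.0), (np.pi / 2, 1j)):
        p = 0.5 + 0.5 * np.cos(theta + phi)   # pattern loaded on the DMD
        b_pos = np.sum(obj * p)               # bucket value, DMD "on" port
        b_neg = np.sum(obj * (1.0 - p))       # complementary port: 1 - p
        coeff += wgt * (b_pos - b_neg)        # differential measurement
    return coeff

rng = np.random.default_rng(0)
obj = rng.random((16, 16))                    # toy object reflectivity
c = cfsi_coeff(obj, 2, 3)
```

The differential b_pos − b_neg cancels the constant background of the fringe patterns (and any common-mode noise), which is the source of the claimed noise robustness.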