Jun Huan

Random Walk on Multiple Networks

Jul 04, 2023
Dongsheng Luo, Yuchen Bian, Yaowei Yan, Xiong Yu, Jun Huan, Xiao Liu, Xiang Zhang

Random walk is a basic algorithm for exploring the structure of networks and is used in many tasks, such as local community detection and network embedding. Existing random walk methods operate on single networks, which contain limited information. In contrast, real data often contain entities of different types and/or from different sources, which are more comprehensively modeled by multiple networks. To take advantage of the rich information in multiple networks and make better inferences about entities, in this study we propose random walk on multiple networks (RWM). RWM is flexible and supports both multiplex networks and general multiple networks, which may form many-to-many node mappings between networks. RWM sends a random walker on each network to obtain the local proximity (i.e., node visiting probabilities) w.r.t. the starting nodes. Walkers with similar visiting probabilities reinforce each other. We theoretically analyze the convergence properties of RWM. Two approximation methods with theoretical performance guarantees are proposed for efficient computation. We apply RWM to link prediction, network embedding, and local community detection. Comprehensive experiments on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of RWM.
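As a rough illustration of the building block behind RWM, the visiting probabilities of a single walker can be computed by random walk with restart via power iteration. This is a minimal NumPy sketch of the single-network case only; the paper's method additionally couples walkers across networks so that walkers with similar distributions reinforce each other.

```python
import numpy as np

def random_walk_with_restart(A, seed, alpha=0.15, iters=100):
    """Iterate p = alpha*e + (1-alpha)*W^T p, where W is the
    row-normalized adjacency matrix and e restarts at the seed node.
    The fixed point p holds the node visiting probabilities."""
    n = A.shape[0]
    W = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    e = np.zeros(n)
    e[seed] = 1.0                         # restart vector
    p = e.copy()
    for _ in range(iters):
        p = alpha * e + (1 - alpha) * W.T @ p
    return p

# Toy graph: a 4-node path 0-1-2-3, walker started at node 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(A, seed=0)
```

Nodes near the seed receive higher probability mass, which is what makes the distribution usable as a local-proximity score for community detection.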

* Accepted to IEEE TKDE 

Temporal Output Discrepancy for Loss Estimation-based Active Learning

Dec 20, 2022
Siyu Huang, Tianyang Wang, Haoyi Xiong, Bihan Wen, Jun Huan, Dejing Dou

While deep learning succeeds in a wide range of tasks, it depends heavily on massive collections of annotated data, which are expensive and time-consuming to obtain. To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset. Motivated by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur high loss. The core of our approach is a measurement, Temporal Output Discrepancy (TOD), that estimates the sample loss by evaluating the discrepancy between outputs given by models at different optimization steps. Our theoretical investigation shows that TOD lower-bounds the accumulated sample loss and can thus be used to select informative unlabeled samples. On the basis of TOD, we further develop an effective unlabeled data sampling strategy as well as an unsupervised learning criterion for active learning. Thanks to the simplicity of TOD, our methods are efficient, flexible, and task-agnostic. Extensive experimental results demonstrate that our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks. In addition, we show that TOD can be utilized to select, from a pool of candidate models, the model with potentially the highest test accuracy.
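The core measurement reduces to a distance between a model's outputs on the same samples at two optimization steps. A minimal NumPy sketch (the outputs below are toy values, and the L2 distance is one natural choice of discrepancy):

```python
import numpy as np

def temporal_output_discrepancy(out_t, out_t_plus):
    """Per-sample L2 distance between model outputs at steps t and t+T.
    Samples whose outputs still change a lot are treated as high-loss."""
    return np.linalg.norm(out_t_plus - out_t, axis=1)

# Toy softmax outputs for 3 unlabeled samples at two training steps
out_t      = np.array([[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]])
out_t_plus = np.array([[0.9, 0.1], [0.4, 0.6], [0.1, 0.9]])
tod = temporal_output_discrepancy(out_t, out_t_plus)

# Query the oracle for the top-k samples by TOD
query = np.argsort(-tod)[:1]  # -> sample index 2, whose output moved most
```

Because the sample whose prediction flipped has the largest discrepancy, it is the one selected for annotation.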

* Accepted for IEEE Transactions on Neural Networks and Learning Systems, 2022. Journal extension of ICCV 2021 [arXiv:2107.14153] 

Towards Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms

Jul 19, 2022
Linbo Liu, Youngsuk Park, Trong Nghia Hoang, Hilaf Hasson, Jun Huan

As deep learning models have gradually become the main workhorse of time series forecasting, their potential vulnerability to adversarial attacks on forecasting and decision systems has emerged as a main issue in recent years. Although such behaviors and defense mechanisms have begun to be investigated for univariate time series forecasting, few studies address multivariate forecasting, which is often preferred for its capacity to encode correlations between different time series. In this work, we study and design adversarial attacks on multivariate probabilistic forecasting models, taking into consideration attack budget constraints and the correlation structure between multiple time series. Specifically, we investigate a sparse indirect attack that hurts the prediction of one item (time series) by attacking the history of only a small number of other items, to save attack cost. To combat these attacks, we also develop two defense strategies. First, we adapt randomized smoothing to the multivariate time series scenario and verify its effectiveness via empirical experiments. Second, we leverage a sparse attacker to enable end-to-end adversarial training that delivers robust probabilistic forecasters. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.
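The first defense, randomized smoothing, amounts to averaging the forecaster's output over random perturbations of the input history. A minimal sketch under simple assumptions (Gaussian input noise, a toy `forecaster` callable that maps a history matrix to one prediction per series; the paper's forecasters are probabilistic deep models):

```python
import numpy as np

def smoothed_forecast(forecaster, history, sigma=0.1, n_samples=100, rng=None):
    """Randomized smoothing adapted to forecasting: average the model's
    predictions over Gaussian perturbations of the input history, which
    damps the effect of small adversarial changes to that history."""
    rng = np.random.default_rng(rng)
    preds = [forecaster(history + sigma * rng.standard_normal(history.shape))
             for _ in range(n_samples)]
    return np.mean(preds, axis=0)

# Toy "forecaster": predicts the last observed value of each series
forecaster = lambda h: h[:, -1]
history = np.array([[1.0, 2.0, 3.0],
                    [4.0, 4.0, 4.0]])  # 2 items x 3 time steps
smoothed = smoothed_forecast(forecaster, history, sigma=0.05,
                             n_samples=2000, rng=0)
```

With enough noise samples the smoothed prediction concentrates near the clean prediction, while an attacker must now shift the forecast on average over the noise rather than at a single input.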

Semi-Supervised Active Learning with Temporal Output Discrepancy

Jul 29, 2021
Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, Dejing Dou

While deep learning succeeds in a wide range of tasks, it depends heavily on massive collections of annotated data, which are expensive and time-consuming to obtain. To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset. Motivated by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incur high loss. The core of our approach is a measurement, Temporal Output Discrepancy (TOD), that estimates the sample loss by evaluating the discrepancy between outputs given by models at different optimization steps. Our theoretical investigation shows that TOD lower-bounds the accumulated sample loss and can thus be used to select informative unlabeled samples. On the basis of TOD, we further develop an effective unlabeled data sampling strategy as well as an unsupervised learning criterion that enhances model performance by incorporating the unlabeled data. Thanks to the simplicity of TOD, our active learning approach is efficient, flexible, and task-agnostic. Extensive experimental results demonstrate that our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
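The two uses of the loss proxy described above can be sketched as follows: a budget-constrained query rule that ranks unlabeled samples by their discrepancy, and an unsupervised term added to the training loss. Function names, the weight `lam`, and the squared-error consistency form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def query_by_tod(tod_unlabeled, budget):
    """Query the oracle for the unlabeled samples whose temporal output
    discrepancy (a loss proxy) is largest, within the labeling budget."""
    return np.argsort(-tod_unlabeled)[:budget]

def semi_supervised_loss(sup_loss, out_t, out_prev, lam=0.05):
    """Supervised loss plus an unsupervised consistency term penalizing
    output drift on unlabeled data across optimization steps."""
    consistency = np.mean(np.sum((out_t - out_prev) ** 2, axis=1))
    return sup_loss + lam * consistency

tod = np.array([0.1, 0.9, 0.4])       # loss proxies for 3 unlabeled samples
picked = query_by_tod(tod, budget=2)  # -> indices [1, 2]
```

Both pieces reuse quantities the training loop already produces, which is why the approach stays cheap and task-agnostic.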

* ICCV 2021. Code is available at https://github.com/siyuhuang/TOD 

Attentive Social Recommendation: Towards User And Item Diversities

Nov 15, 2020
Dongsheng Luo, Yuchen Bian, Xiang Zhang, Jun Huan

A social recommendation system predicts unobserved user-item rating values by taking advantage of user-user social relations and user-item ratings. However, user/item diversities in social recommendation are not well utilized in the literature; in particular, inter-factor (social and rating) relations and distinct rating values deserve more consideration. In this paper, we propose an attentive social recommendation system (ASR) that addresses this issue from two aspects. First, in ASR, Rec-conv graph network layers are proposed to extract the social factor, the user-rating factor, and the item-rated factor, and then automatically assign contribution weights to aggregate these factors into the user/item embedding vectors. Second, a disentangling strategy is applied to handle diverse rating values. Extensive experiments on benchmark datasets demonstrate the effectiveness and advantages of ASR.
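The weighted aggregation step can be illustrated with a generic attention mechanism: score each factor embedding, softmax the scores into contribution weights, and take the weighted sum. This is a minimal stand-in with a hypothetical dot-product scorer (`att`), not the Rec-conv layer itself.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())  # subtract max for numerical stability
    return z / z.sum()

def aggregate_factors(factors, att):
    """Attention-style aggregation: score each factor embedding against
    a learned vector, normalize the scores with softmax, and combine the
    factors into one embedding by their weights."""
    scores = np.array([f @ att for f in factors])
    weights = softmax(scores)
    return weights, sum(w * f for w, f in zip(weights, factors))

# Toy: social, user-rating, and item-rated factor embeddings (d = 3)
factors = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, 1.0])]
att = np.array([0.5, 0.5, 0.5])  # hypothetical attention parameters
weights, emb = aggregate_factors(factors, att)
```

With equal scores the three factors contribute equally; in the trained model the weights would instead reflect how informative each factor is for a given user or item.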

* 8 Pages 

Generating Person Images with Appearance-aware Pose Stylizer

Jul 17, 2020
Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou

Generating high-quality person images is challenging due to the sophisticated entanglement among image factors, e.g., appearance, pose, foreground, background, local details, and global structures. In this paper, we present a novel end-to-end framework that generates realistic person images from given person poses and appearances. The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS), which generates human images by progressively coupling the target pose with the conditioned person appearance. The framework is highly flexible and controllable: it effectively decouples the various complex person image factors in the encoding phase and re-couples them in the decoding phase. In addition, we present a new normalization method named adaptive patch normalization, which enables region-specific normalization and performs well in person image generation models. Experiments on two benchmark datasets show that our method generates visually appealing and realistic-looking results from arbitrary image and pose inputs.
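The idea of region-specific normalization can be sketched by standardizing each spatial patch of a feature map with its own statistics, rather than whole-map or per-channel statistics. This is a bare-bones NumPy sketch; the paper's adaptive patch normalization presumably also learns modulation parameters, which are omitted here.

```python
import numpy as np

def patch_normalize(x, patch=2, eps=1e-5):
    """Region-specific normalization: standardize each non-overlapping
    spatial patch of a (C, H, W) feature map with that patch's own
    mean and standard deviation."""
    c, h, w = x.shape
    out = np.empty_like(x)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            region = x[:, i:i + patch, j:j + patch]
            out[:, i:i + patch, j:j + patch] = (
                (region - region.mean()) / (region.std() + eps))
    return out

x = np.random.default_rng(0).standard_normal((1, 4, 4))
y = patch_normalize(x)  # each 2x2 patch now has (near-)zero mean
```

Normalizing per region lets different body parts of the generated person be modulated independently, which is the motivation for a region-specific scheme.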

* Appearing at IJCAI 2020. The code is available at https://github.com/siyuhuang/PoseStylizer 

Parameter-Free Style Projection for Arbitrary Style Transfer

Mar 17, 2020
Siyu Huang, Haoyi Xiong, Tianyang Wang, Qingzhong Wang, Zeyu Chen, Jun Huan, Dejing Dou

Arbitrary image style transfer is a challenging task that aims to stylize a content image conditioned on an arbitrary style image. In this task, the content-style feature transformation is a critical component for proper fusion of features. Existing feature transformation algorithms often suffer from unstable learning, loss of content and style details, and unnatural stroke patterns. To mitigate these issues, this paper proposes a parameter-free algorithm, Style Projection, for fast yet effective content-style transformation. To leverage the proposed Style Projection component, this paper further presents a real-time feed-forward model for arbitrary style transfer, including a regularization term that matches the content semantics between inputs and outputs. Extensive experiments demonstrate the effectiveness and efficiency of the proposed method in terms of qualitative analysis, quantitative evaluation, and a user study.
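To make "parameter-free content-style transformation" concrete, here is a classic transformation of that kind: re-scaling each content feature channel to the style image's channel statistics (the AdaIN-style transform). It is an illustrative stand-in only; the abstract does not specify Style Projection's actual operator, which differs from this sketch.

```python
import numpy as np

def channel_stat_transfer(content, style, eps=1e-5):
    """A classic parameter-free content-style transformation: standardize
    each content channel of a (C, H, W) feature map, then re-scale and
    re-shift it to the style feature's per-channel mean and std. No
    learned weights are involved, hence "parameter-free"."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content = rng.standard_normal((3, 4, 4))
style = 2.0 * rng.standard_normal((3, 4, 4)) + 1.0
out = channel_stat_transfer(content, style)
```

After the transform, the output carries the style features' first- and second-order channel statistics while retaining the content feature's spatial arrangement.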

* 9 pages, 12 figures 

Ultrafast Photorealistic Style Transfer via Neural Architecture Search

Dec 05, 2019
Jie An, Haoyi Xiong, Jun Huan, Jiebo Luo

The key challenge in photorealistic style transfer is that an algorithm should faithfully transfer the style of a reference photo to a content photo while the generated image still looks like one captured by a camera. Although several photorealistic style transfer algorithms have been proposed, they rely on post- and/or pre-processing to make the generated images look photorealistic; with this additional processing disabled, they fail to produce plausible photorealistic stylization in terms of detail preservation and photorealism. In this work, we propose an effective solution to these issues. Our method consists of a construction step (C-step) to build a photorealistic stylization network and a pruning step (P-step) for acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet based on a carefully designed pre-analysis. PhotoNet integrates a feature aggregation module (BFA) and instance normalized skip links (INSL). To generate faithful stylization, we introduce multiple style transfer modules in the decoder and INSLs. PhotoNet significantly outperforms existing algorithms in terms of both efficiency and effectiveness. In the P-step, we adopt a neural architecture search method to accelerate PhotoNet, proposing an automatic network pruning framework in the manner of teacher-student learning for photorealistic stylization. The network architecture resulting from the search, named PhotoNAS, achieves significant acceleration over PhotoNet while keeping the stylization effects almost intact. We conduct extensive experiments on both image and video transfer. The results show that our method produces favorable results while achieving a 20-30x speedup over existing state-of-the-art approaches. Notably, the proposed algorithm accomplishes this without any pre- or post-processing.
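The teacher-student search in the P-step implies a scoring rule for candidate pruned architectures: reward fidelity to the teacher's (PhotoNet's) stylized output while penalizing runtime. The function below is a hypothetical sketch of such a rule, with an assumed MSE fidelity term and latency weight `lam`; the paper's actual search objective is not given in the abstract.

```python
import numpy as np

def score_candidate(student_out, teacher_out, latency_ms, lam=0.01):
    """Teacher-student pruning score for a candidate architecture:
    squared error against the teacher's stylized output plus a latency
    penalty. Lower is better; `lam` trades quality for speed."""
    fidelity = float(np.mean((student_out - teacher_out) ** 2))
    return fidelity + lam * latency_ms

teacher = np.ones((3, 8, 8))  # toy stylized output from the teacher
fast_but_lossy = score_candidate(teacher * 0.9, teacher, latency_ms=5.0)
slow_but_exact = score_candidate(teacher, teacher, latency_ms=40.0)
```

Under this rule a much faster network with a small fidelity gap can beat an exact but slow one, which is the trade-off a pruning search navigates.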

SecureGBM: Secure Multi-Party Gradient Boosting

Nov 27, 2019
Zhi Fengy, Haoyi Xiong, Chuanyuan Song, Sijia Yang, Baoxin Zhao, Licheng Wang, Zeyu Chen, Shengwen Yang, Liping Liu, Jun Huan

Federated machine learning systems have been widely used to facilitate joint data analytics across distributed datasets owned by different parties that do not trust each other. In this paper, we propose SecureGBM, a novel Gradient Boosting Machines (GBM) framework built on a multi-party computation model based on semi-homomorphic encryption, in which every involved party jointly obtains a shared gradient boosting model while protecting its own data from potential privacy leakage and inferential identification. More specifically, our work focuses on a "dual-party" secure learning scenario: each of the two parties owns a unique view (i.e., attributes or features) of the same group of samples, while only one party owns the labels, and neither feature nor label data may be shared with the other party. To achieve this goal, we first extend LightGBM, a well-known implementation of tree-based GBM, by covering its key training and inference operations with SEAL homomorphic encryption schemes. However, the performance of this re-implementation is significantly bottlenecked by the explosive inflation of the communication payloads, since ciphertext size grows with the length of the plaintexts. We therefore propose to use stochastic approximation techniques to reduce the communication payloads while accelerating the overall training procedure in a statistical manner. Our experiments on real-world data show that SecureGBM secures the communication and computation of LightGBM training and inference for both parties while losing less than 3% AUC, using the same number of boosting iterations, on a wide range of benchmark datasets.
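The stochastic-approximation idea for shrinking the communication payload can be sketched in isolation: instead of exchanging every (encrypted) per-sample gradient, transmit a random subsample rescaled so the aggregate stays an unbiased estimate. This is a minimal plaintext sketch; in SecureGBM the transmitted values would be ciphertexts, which is exactly why sending fewer of them matters.

```python
import numpy as np

def subsample_gradients(grads, keep_frac=0.1, rng=None):
    """Transmit only a random fraction of per-sample gradients, rescaled
    by 1/keep_frac so that the sum of the payload is an unbiased
    estimate of the full gradient sum."""
    rng = np.random.default_rng(rng)
    n = len(grads)
    idx = rng.choice(n, size=max(1, int(keep_frac * n)), replace=False)
    return idx, grads[idx] / keep_frac

grads = np.ones(1000)  # toy per-sample gradients
idx, payload = subsample_gradients(grads, keep_frac=0.1, rng=0)
# payload.sum() estimates grads.sum() with ~10x less communication
```

The payload shrinks by roughly the subsampling factor, at the cost of variance in the aggregated statistic, which gradient boosting tolerates well over many iterations.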

* The first two authors contributed equally to the manuscript. The paper has been accepted for publication in IEEE BigData 2019 