Xiaohong Guan

Fast Gumbel-Max Sketch and its Applications

Feb 10, 2023
Yuanming Zhang, Pinghui Wang, Yiyan Qi, Kuankuan Cheng, Junzhou Zhao, Guangjian Tian, Xiaohong Guan

The well-known Gumbel-Max Trick for sampling elements from a categorical distribution (or more generally a non-negative vector) and its variants have been widely used in areas such as machine learning and information retrieval. To sample a random element $i$ in proportion to its positive weight $v_i$, the Gumbel-Max Trick first computes a Gumbel random variable $g_i$ for each positive-weight element $i$, and then samples the element $i$ with the largest value of $g_i+\ln v_i$. Recently, applications including similarity estimation and weighted cardinality estimation require generating $k$ independent Gumbel-Max variables from high-dimensional vectors. However, this is computationally expensive for a large $k$ (e.g., hundreds or even thousands) when using the traditional Gumbel-Max Trick. To solve this problem, we propose a novel algorithm, FastGM, which reduces the time complexity from $O(kn^+)$ to $O(k \ln k + n^+)$, where $n^+$ is the number of positive elements in the vector of interest. FastGM stops computing Gumbel random variables early for many elements, especially those with small weights. We perform experiments on a variety of real-world datasets, and the results demonstrate that FastGM is orders of magnitude faster than state-of-the-art methods without sacrificing accuracy or incurring additional cost.

* Accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE). arXiv admin note: text overlap with arXiv:2002.00413 
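
For reference, the baseline that the paper accelerates is the traditional Gumbel-Max Trick itself. Below is a minimal NumPy sketch of that trick (not of FastGM): each positive-weight element's log weight is perturbed with Gumbel(0, 1) noise and the argmax is returned. The function name `gumbel_max_sample` and the toy weights are illustrative.

```python
import numpy as np

def gumbel_max_sample(weights, k=1, rng=None):
    """Draw k independent Gumbel-Max samples from a non-negative weight vector."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    pos = np.flatnonzero(weights > 0)           # only positive-weight elements matter
    log_w = np.log(weights[pos])
    g = rng.gumbel(size=(k, pos.size))          # one Gumbel(0, 1) per element per sample
    return pos[np.argmax(g + log_w, axis=1)]    # argmax of g_i + ln v_i, mapped back to indices

# Elements are drawn roughly in proportion to their weights:
samples = gumbel_max_sample([0.1, 0.0, 2.0, 0.5], k=10000)
print(np.bincount(samples, minlength=4) / 10000)
```

Note that this direct approach costs $O(kn^+)$ time, which is exactly the cost FastGM avoids.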

Federated Learning over Coupled Graphs

Jan 26, 2023
Runze Lei, Pinghui Wang, Junzhou Zhao, Lin Lan, Jing Tao, Chao Deng, Junlan Feng, Xidian Wang, Xiaohong Guan

Graphs are widely used to represent relations among entities. When one party owns the complete data, the entire graph can easily be built, and analysis on it is straightforward. In many scenarios, however, it is impractical to centralize the data due to privacy concerns: each organization or party keeps only a part of the whole graph, i.e., the graph data is isolated across different parties. Recently, Federated Learning (FL) has been proposed to solve this data-isolation issue, mainly for Euclidean data. Applying FL to graph data remains challenging because graphs contain topological information, which is notorious for its non-IID nature and is hard to partition. In this work, we propose a novel FL framework for graph data, FedCog, to efficiently handle coupled graphs, a kind of distributed graph data that widely exists in real-world applications such as mobile carriers' communication networks and banks' transaction networks. We theoretically prove the correctness and security of FedCog. Experimental results demonstrate that FedCog significantly outperforms traditional FL methods on graphs; remarkably, it improves the accuracy of node classification tasks by up to 14.7%.

* Accepted by IEEE Transactions on Parallel and Distributed Systems 

Uncertainty Set Prediction of Aggregated Wind Power Generation based on Bayesian LSTM and Spatio-Temporal Analysis

Oct 07, 2021
Xiaopeng Li, Jiang Wu, Zhanbo Xu, Kun Liu, Jun Yu, Xiaohong Guan

Aggregated stochastic characteristics of geographically distributed wind generation provide valuable information for secure and economical system operation in electricity markets. This paper focuses on predicting the uncertainty set of the aggregated generation of geographically distributed wind farms. A spatio-temporal model is proposed to learn dynamic features from partial observations of the near-surface wind fields of neighboring wind farms. We use a Bayesian LSTM, a probabilistic prediction model, to obtain the uncertainty set of the generation of individual wind farms; the spatial correlation between different wind farms is then used to correct the outputs. Numerical results based on actual data from 6 wind farms in northwest China show that the uncertainty set of the aggregated generation of distributed wind farms is less volatile than that of a single wind farm.
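
As a rough illustration of how a prediction interval can be obtained from a recurrent probabilistic model, the sketch below uses Monte Carlo dropout in a PyTorch LSTM and takes empirical quantiles of repeated stochastic forward passes as the uncertainty set. This is an assumption-laden stand-in, not the paper's Bayesian LSTM or its spatial-correlation correction; `MCDropoutLSTM`, the layer sizes, and the feature dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class MCDropoutLSTM(nn.Module):
    """Illustrative Bayesian-style LSTM: dropout stays active at prediction time,
    so repeated stochastic forward passes approximate a predictive distribution."""
    def __init__(self, n_features, hidden=64, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))    # predict next-step generation

def uncertainty_set(model, x, n_samples=100, alpha=0.1):
    """Lower/upper quantiles of MC-dropout predictions as a (1 - alpha) uncertainty set."""
    model.train()                                  # keep dropout on for MC sampling
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])   # (S, batch, 1)
    lo = torch.quantile(preds, alpha / 2, dim=0)
    hi = torch.quantile(preds, 1 - alpha / 2, dim=0)
    return lo, hi

# Hypothetical usage: 32 sequences of 24 hourly steps with 5 wind-field features.
model = MCDropoutLSTM(n_features=5)
lo, hi = uncertainty_set(model, torch.randn(32, 24, 5))
```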

Infer-AVAE: An Attribute Inference Model Based on Adversarial Variational Autoencoder

Dec 30, 2020
Yadong Zhou, Zhihao Ding, Xiaoming Liu, Chao Shen, Lingling Tong, Xiaohong Guan

Facing the sparsity of user attributes on social networks, attribute inference aims to infer missing attributes based on existing data and additional information such as social connections between users. Recently, Variational Autoencoders (VAEs) have been successfully applied to this problem in a semi-supervised way. However, the latent representations learned by the encoder contain either insufficient or useless information: i) MLPs can successfully reconstruct the input data but fail to complete the missing parts, and ii) GNNs merge information according to social connections but suffer from over-smoothing, a common problem with GNNs. Moreover, existing methods neglect to regularize the decoder; as a result, it lacks adequate inference ability and suffers from severe overfitting. To address these issues, we propose an attribute inference model based on an adversarial VAE (Infer-AVAE). Our model deliberately unifies MLPs and GNNs in the encoder to learn dual latent representations: one contains only the observed attributes of each user, while the other aggregates extra information from the neighborhood. An adversarial network is then trained to leverage the differences between the two representations, and adversarial training is conducted to guide the GNN using the MLP toward robust representations. In addition, a mutual information constraint is introduced into the loss function to specifically train the decoder as a discriminator, so that it can make better use of the auxiliary information in the representations for attribute inference. On real-world social network datasets, experimental results demonstrate that our model outperforms the state of the art by 7.0% in accuracy on average.
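
To make the dual-representation idea concrete, here is a hedged PyTorch sketch in which an MLP encodes only a node's own attributes while a one-layer GCN-style encoder aggregates neighborhood information. It omits the adversarial network, the mutual information constraint, and the decoder; `DualEncoder` and all sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Sketch of dual latent representations: z_self from a node's own observed
    attributes only, z_nbr from a symmetric-normalized one-hop aggregation."""
    def __init__(self, n_attrs, dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_attrs, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.gnn_weight = nn.Linear(n_attrs, dim)

    def forward(self, x, adj):
        z_self = self.mlp(x)                              # own attributes only
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv_sqrt = a_hat.sum(1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        z_nbr = torch.relu(self.gnn_weight(a_norm @ x))   # neighborhood-aware
        return z_self, z_nbr

# Toy usage: 4 nodes, 8 attributes, a small undirected adjacency matrix.
x = torch.rand(4, 8)
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
z_self, z_nbr = DualEncoder(n_attrs=8)(x, adj)
```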

Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding

Jul 06, 2020
Lin Lan, Pinghui Wang, Xuefeng Du, Kaikai Song, Jing Tao, Xiaohong Guan

We study the problem of node classification on graphs with few-shot novel labels, which has two distinctive properties: (1) novel labels emerge in the graph, and (2) each novel label has only a few representative nodes for training a classifier. The study of this problem is instructive and corresponds to many applications, such as recommendations for newly formed groups with only a few users in online social networks. To cope with this problem, we propose a novel Meta Transformed Network Embedding framework (MetaTNE), which consists of three modules: (1) a \emph{structural module} provides each node with a latent representation according to the graph structure; (2) a \emph{meta-learning module} captures the relationships between the graph structure and the node labels as prior knowledge in a meta-learning manner, and we additionally introduce an \emph{embedding transformation function} that remedies the deficiency of the straightforward use of meta-learning, so that the meta-learned prior knowledge can be used to facilitate the learning of few-shot novel labels; (3) an \emph{optimization module} employs a simple yet effective scheduling strategy to train the above two modules with a balance between graph structure learning and meta-learning. Experiments on four real-world datasets show that MetaTNE brings a substantial improvement over state-of-the-art methods.

Multi-Agent Deep Reinforcement Learning for HVAC Control in Commercial Buildings

Jun 25, 2020
Liang Yu, Yi Sun, Zhanbo Xu, Chao Shen, Dong Yue, Tao Jiang, Xiaohong Guan

In commercial buildings, about 40%-50% of the total electricity consumption is attributed to Heating, Ventilation, and Air Conditioning (HVAC) systems, which places an economic burden on building operators. In this paper, we aim to minimize the energy cost of an HVAC system in a multi-zone commercial building under dynamic pricing, taking into account random zone occupancy, thermal comfort, and indoor air quality comfort. Achieving this aim is very challenging due to unknown thermal dynamics models, parameter uncertainties (e.g., outdoor temperature, electricity price, and number of occupants), spatially and temporally coupled constraints on indoor temperature and CO2 concentration, a large discrete solution space, and a non-convex and non-separable objective function. To this end, the energy cost minimization problem is reformulated as a Markov game, and an HVAC control algorithm based on multi-agent deep reinforcement learning with an attention mechanism is proposed to solve it. The proposed algorithm requires no prior knowledge of the uncertain parameters and can operate without building thermal dynamics models. Simulation results based on real-world traces show the effectiveness, robustness, and scalability of the proposed algorithm.

* 14 pages, 21 figures 

Learning Graph Embedding with Limited Labeled Data: An Efficient Sampling Approach

Mar 13, 2020
Qirui Li, Xiaoming Liu, Chao Shen, Xi Peng, Yadong Zhou, Xiaohong Guan

Semi-supervised graph embedding methods, represented by graph convolutional networks, have become one of the most popular approaches for applying deep learning to graph data. Most existing work focuses on designing novel algorithmic structures to improve performance, but ignores a common training problem: can these methods achieve the same performance with limited labeled data? To tackle this research gap, we propose a sampling-based training framework for semi-supervised graph embedding methods that achieves better performance with a smaller training set. The key idea is to integrate sampling theory and embedding methods in a pipeline, which has the following advantages: 1) the sampled training data maintain more accurate graph characteristics than uniformly chosen data, which eliminates model deviation; 2) the smaller training set reduces the human cost of labeling. Extensive experiments show that the sampling-based method can achieve the same performance with only 10%-50% of the training data, verifying that the framework can extend existing semi-supervised methods to scenarios with extremely small amounts of labeled data.

Fast Generating A Large Number of Gumbel-Max Variables

Feb 02, 2020
Yiyan Qi, Pinghui Wang, Yuanming Zhang, Junzhou Zhao, Guangjian Tian, Xiaohong Guan

The well-known Gumbel-Max Trick for sampling elements from a categorical distribution (or more generally a nonnegative vector) and its variants have been widely used in areas such as machine learning and information retrieval. To sample a random element $i$ (or a Gumbel-Max variable $i$) in proportion to its positive weight $v_i$, the Gumbel-Max Trick first computes a Gumbel random variable $g_i$ for each positive-weight element $i$, and then samples the element $i$ with the largest value of $g_i+\ln v_i$. Recently, applications including similarity estimation and graph embedding require generating $k$ independent Gumbel-Max variables from high-dimensional vectors. However, this is computationally expensive for a large $k$ (e.g., hundreds or even thousands) when using the traditional Gumbel-Max Trick. To solve this problem, we propose a novel algorithm, \emph{FastGM}, that reduces the time complexity from $O(kn^+)$ to $O(k \ln k + n^+)$, where $n^+$ is the number of positive elements in the vector of interest. Instead of computing $k$ independent Gumbel random variables directly, we exploit a technique that generates these variables in descending order. Using this technique, FastGM computes the values $g_i+\ln v_i$ for all positive elements $i$ in descending order. As a result, FastGM significantly reduces the computation time because it can stop computing Gumbel random variables early for many elements, especially those with small weights. Experiments on a variety of real-world datasets show that FastGM is orders of magnitude faster than state-of-the-art methods without sacrificing accuracy or incurring additional cost.

* Accepted by WebConf2020 
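
One standard way to realize such descending-order generation is the exponential-spacing (Rényi) construction: the order statistics of $n$ Exp(1) variables can be produced incrementally as running sums of spacings, and $-\ln E$ of an Exp(1) variable is Gumbel(0, 1), so the smallest exponential yields the largest Gumbel first. The NumPy sketch below shows only this descending-order stream for a single set of standard Gumbels under that assumption; it is not the full FastGM algorithm, which additionally incorporates the weights $\ln v_i$ and early stopping.

```python
import numpy as np

def gumbels_descending(n, rng=None):
    """Yield n standard Gumbel(0, 1) variables in descending order.

    Builds the order statistics of n Exp(1) variables incrementally via
    spacings Exp(1)/(n - j); since -ln(E) of an Exp(1) variable is Gumbel(0, 1),
    the ascending exponentials map to descending Gumbels, which is what makes
    early stopping possible once remaining values cannot win.
    """
    rng = np.random.default_rng() if rng is None else rng
    e = 0.0
    for j in range(n):
        e += rng.exponential() / (n - j)   # next exponential order statistic
        yield -np.log(e)                   # corresponding Gumbel, decreasing

# The stream is sorted from largest to smallest:
vals = list(gumbels_descending(5))
assert vals == sorted(vals, reverse=True)
print(vals)
```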

Adversarial Example Detection by Classification for Deep Speech Recognition

Oct 22, 2019
Saeid Samizade, Zheng-Hua Tan, Chao Shen, Xiaohong Guan

Machine learning systems are vulnerable to adversarial attacks and are highly likely to produce incorrect outputs under such attacks. Attacks are classified as white-box or black-box according to the adversary's level of access to the victim learning algorithm. To defend learning systems against these attacks, existing methods in the speech domain focus on modifying input signals and testing the behaviours of speech recognizers. We, however, formulate the defense as a classification problem and present a strategy for systematically generating adversarial example datasets: one for white-box attacks and one for black-box attacks, each containing both adversarial and normal examples. The white-box attack is a gradient-based method on Baidu DeepSpeech with the Mozilla Common Voice database, while the black-box attack is a gradient-free method on a deep model-based keyword spotting system with the Google Speech Command dataset. The generated datasets are used to train a proposed Convolutional Neural Network (CNN) on cepstral features to detect adversarial examples. Experimental results show that it is possible to accurately distinguish between adversarial and normal examples for known attacks, in both single-condition and multi-condition training settings, while the performance degrades dramatically for unknown attacks. The adversarial datasets and the source code are made publicly available.
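
As a minimal sketch of the detection-by-classification idea, the PyTorch model below is a small CNN that takes a cepstral feature map (e.g., MFCC-like coefficients over frames) and outputs adversarial-vs-normal logits. The layer sizes, feature dimensions, and the name `DetectionCNN` are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class DetectionCNN(nn.Module):
    """Illustrative binary classifier over cepstral feature maps (adversarial vs. normal)."""
    def __init__(self, n_coeffs=40, n_frames=100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * (n_coeffs // 4) * (n_frames // 4), 2)

    def forward(self, x):              # x: (batch, 1, n_coeffs, n_frames)
        h = self.conv(x)
        return self.fc(h.flatten(1))   # logits for {normal, adversarial}

# Hypothetical usage: a batch of 8 cepstral feature maps of size 40 x 100.
logits = DetectionCNN()(torch.randn(8, 1, 40, 100))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
```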

Meta Reinforcement Learning with Task Embedding and Shared Policy

Jun 04, 2019
Lin Lan, Zhenguo Li, Xiaohong Guan, Pinghui Wang

Despite significant progress, deep reinforcement learning (RL) suffers from data inefficiency and limited generalization. Recent efforts apply meta-learning to learn a meta-learner from a set of RL tasks such that a novel but related task can be solved quickly. Though specific in some ways, different tasks in meta-RL are generally similar at a high level. However, most meta-RL methods do not explicitly and adequately model the specific and shared information among different tasks, which limits their ability to learn training tasks and to generalize to novel tasks. In this paper, we propose to capture the shared information on the one hand and to meta-learn how to quickly abstract the task-specific information on the other. Methodologically, we train an SGD meta-learner to quickly optimize a task encoder for each task, which generates a task embedding based on past experience; meanwhile, we learn a policy that is shared across all tasks and conditioned on the task embeddings. Empirical results on four simulated tasks demonstrate that our method has better learning capacity on both training and novel tasks and attains up to 3 to 4 times higher returns compared to baselines.

* Accepted to IJCAI 2019 
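
Below is a minimal sketch of the two components described above, under assumed shapes and sizes: a task encoder pools a batch of past transitions into a task embedding, and a single shared policy is conditioned on that embedding. The SGD meta-learning loop that adapts the encoder per task is omitted; `TaskEncoder`, `SharedPolicy`, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    """Maps past transitions (obs, action, reward) from one task to a task embedding."""
    def __init__(self, obs_dim, act_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim + 1, 64),
                                 nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, obs, act, rew):                # shapes: (T, obs), (T, act), (T, 1)
        return self.net(torch.cat([obs, act, rew], dim=-1)).mean(dim=0)  # pooled embedding

class SharedPolicy(nn.Module):
    """A single policy shared across tasks, conditioned on the task embedding."""
    def __init__(self, obs_dim, act_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + emb_dim, 64),
                                 nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, obs, task_emb):
        z = task_emb.expand(obs.size(0), -1)         # broadcast embedding to every step
        return torch.tanh(self.net(torch.cat([obs, z], dim=-1)))

# Toy usage: embed 20 past transitions, then act on 5 new observations.
enc, pi = TaskEncoder(obs_dim=4, act_dim=2), SharedPolicy(obs_dim=4, act_dim=2)
z = enc(torch.randn(20, 4), torch.randn(20, 2), torch.randn(20, 1))
actions = pi(torch.randn(5, 4), z)
```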