Guang Cheng

Bootstrapping Upper Confidence Bound

Jun 12, 2019
Botao Hao, Yasin Abbasi-Yadkori, Zheng Wen, Guang Cheng

The Upper Confidence Bound (UCB) method is arguably the most celebrated approach to online decision making with partial information feedback. Existing techniques for constructing confidence bounds are typically built upon various concentration inequalities, which often leads to over-exploration. In this paper, we propose a non-parametric and data-dependent UCB algorithm based on the multiplier bootstrap. To improve its finite-sample performance, we further incorporate a second-order correction into this construction. In theory, we derive both problem-dependent and problem-independent regret bounds for multi-armed bandits under a much weaker tail assumption than the standard sub-Gaussianity. Numerical results demonstrate significant regret reductions by our method in comparison with several baselines across a range of multi-armed and linear bandit problems.
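
As a rough illustration of the bootstrap-based exploration bonus, the Python sketch below computes a per-arm UCB index whose bonus is a quantile of multiplier-bootstrapped mean deviations. The function name, the standard normal multipliers, and the quantile level are illustrative assumptions; the paper's second-order correction is omitted.

```python
import numpy as np

def multiplier_bootstrap_ucb_index(rewards, n_boot=200, alpha=0.05, rng=None):
    """UCB index for one arm: sample mean plus a bootstrapped exploration bonus.

    The bonus is the (1 - alpha) quantile of (1/n) * sum_i w_i * (r_i - mean),
    where the multipliers w_i are i.i.d. standard normal. This is a hypothetical
    sketch; the paper additionally applies a second-order correction.
    """
    if rng is None:
        rng = np.random.default_rng()
    r = np.asarray(rewards, dtype=float)
    centered = r - r.mean()
    w = rng.standard_normal((n_boot, len(r)))      # multiplier weights
    boot_dev = (w * centered).mean(axis=1)         # bootstrapped mean deviations
    return r.mean() + np.quantile(boot_dev, 1.0 - alpha)

# Toy usage: pull the arm with the largest bootstrap-UCB index.
rng = np.random.default_rng(0)
pulls = [rng.normal(loc=mu, size=30) for mu in (0.0, 0.3, 0.5)]
best_arm = max(range(len(pulls)),
               key=lambda k: multiplier_bootstrap_ucb_index(pulls[k], rng=rng))
```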

An Efficient Network Intrusion Detection System Based on Feature Selection and Ensemble Classifier

Apr 02, 2019
Yu-Yang Zhou, Guang Cheng

As the Internet has become pervasive in everyday life, countering cyber threats, and attack detection in particular, is a challenging area of research in the field of cyber security. Intrusion detection systems (IDSs) are essential entities in a network topology, aiming to safeguard the integrity and availability of sensitive assets in the protected systems. Although many supervised and unsupervised learning approaches from machine learning and pattern recognition have been used to increase the efficacy of IDSs, handling the many redundant and irrelevant features in high-dimensional datasets remains a challenge for network anomaly detection. To this end, we propose a novel methodology that combines correlation-based feature selection (CFS) and the bat algorithm (BA) with an ensemble classifier based on C4.5, Random Forest (RF), and Forest by Penalizing Attributes (Forest PA), and that can classify both common and rare types of attacks with high accuracy and efficiency. Experimental results on a recent intrusion detection dataset, namely CIC-IDS2017, reveal that our CFS-BA-Ensemble method identifies the most critical features and significantly outperforms individual approaches, achieving high accuracy and a low false alarm rate. Moreover, compared with the majority of existing state-of-the-art and legacy techniques, our approach performs better in terms of classification accuracy, F-measure, attack detection rate, and false alarm rate.
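
The pipeline below is a loose scikit-learn analogue of the feature-selection-plus-ensemble idea: a simple univariate filter stands in for the CFS plus bat-algorithm search (the bat algorithm is omitted), and an entropy-based decision tree plus two tree ensembles stand in for C4.5, RF, and Forest PA, which have no scikit-learn implementations. Synthetic data replaces CIC-IDS2017; all names and parameter choices are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a flow-level intrusion detection dataset.
X, y = make_classification(n_samples=2000, n_features=60, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("c45_like", DecisionTreeClassifier(criterion="entropy", random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("extra_trees", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",                     # average predicted class probabilities
)
# Feature selection (a plain filter in place of the CFS-BA search),
# followed by the ensemble classifier.
model = make_pipeline(SelectKBest(f_classif, k=15), ensemble)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```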

Stein Neural Sampler

Oct 08, 2018
Tianyang Hu, Zixiang Chen, Hanxi Sun, Jincheng Bai, Mao Ye, Guang Cheng

Figure 1 for Stein Neural Sampler
Figure 2 for Stein Neural Sampler
Figure 3 for Stein Neural Sampler
Figure 4 for Stein Neural Sampler

We propose two novel samplers that produce high-quality samples from a given (un-normalized) probability density. Sampling is achieved by transforming a reference distribution to the target distribution with neural networks, which are trained separately by minimizing two kinds of Stein discrepancies; hence our method is named the Stein neural sampler. Theoretical and empirical results suggest that, compared with traditional sampling schemes, our samplers share three advantages: 1. they are asymptotically correct; 2. they experience fewer convergence issues in practice; 3. they generate samples instantaneously.
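
For concreteness, here is a minimal PyTorch sketch of the kernelized-Stein-discrepancy flavour of this idea: a small network transforms reference noise, and its parameters are trained by minimizing a V-statistic estimate of the KSD against a standard normal target. The kernel bandwidth, network size, and target are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def score_std_normal(x):
    """Score of the standard normal target: grad log p(x) = -x."""
    return -x

def ksd_rbf(x, score, h=1.0):
    """V-statistic kernelized Stein discrepancy with an RBF kernel.
    x: (n, d) samples; score: function returning grad log p(x)."""
    n, d = x.shape
    s = score(x)                                    # (n, d)
    diff = x.unsqueeze(1) - x.unsqueeze(0)          # (n, n, d), entries x_i - x_j
    sqdist = (diff ** 2).sum(-1)
    k = torch.exp(-sqdist / (2 * h ** 2))           # RBF kernel matrix
    ss = s @ s.T                                    # s(x_i) . s(x_j)
    s_i_diff = (s.unsqueeze(1) * diff).sum(-1)      # s(x_i) . (x_i - x_j)
    s_j_diff = (s.unsqueeze(0) * diff).sum(-1)      # s(x_j) . (x_i - x_j)
    trace_term = d / h ** 2 - sqdist / h ** 4
    u = k * (ss + s_i_diff / h ** 2 - s_j_diff / h ** 2 + trace_term)
    return u.mean()

# Train a small MLP to push uniform reference noise toward the normal target.
gen = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(2000):
    z = 2 * torch.rand(256, 2) - 1                  # reference distribution
    loss = ksd_rbf(gen(z), score_std_normal)
    opt.zero_grad(); loss.backward(); opt.step()
```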

Early Stopping for Nonparametric Testing

Sep 17, 2018
Meimei Liu, Guang Cheng

Early stopping of iterative algorithms is an algorithmic regularization method for avoiding over-fitting in estimation and classification. In this paper, we show that early stopping can also be used to obtain minimax optimal testing in a general non-parametric setup. Specifically, a Wald-type test statistic is obtained based on an iterated estimate produced by functional gradient descent algorithms in a reproducing kernel Hilbert space. A notable contribution is to establish a "sharp" stopping rule: when the number of iterations achieves an optimal order, testing optimality is achievable; otherwise, testing optimality becomes impossible. As a by-product, a similar sharpness result is also derived for minimax optimal estimation under early stopping, as studied in [11] and [19]. All obtained results hold for various kernel classes, including Sobolev smoothness classes and Gaussian kernel classes.
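
A small NumPy sketch of the ingredients: functional gradient descent on the squared loss in an RKHS, tracked through the fitted values, and a Wald-type statistic formed from the stopped iterate. The kernel, step size, and the stopping time used here are arbitrary illustrations; the paper's contribution is the sharp, order-optimal choice of the stopping time and the statistic's calibration, neither of which is reproduced here.

```python
import numpy as np

def rbf_kernel(x, y, h=0.5):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * h ** 2))

def kernel_gradient_descent(x, y, n_iter, lr=0.5, h=0.5):
    """Functional gradient descent on (1/2n)||y - f||^2 in an RKHS,
    tracked through the fitted values f = (f(x_1), ..., f(x_n))."""
    K = rbf_kernel(x, x, h)
    f = np.zeros(len(x))
    for _ in range(n_iter):
        f = f - lr * K @ (f - y) / len(x)    # one functional gradient step
    return f

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = 0.3 * np.sin(4 * x) + rng.normal(scale=0.5, size=n)   # H0 would be: signal = 0

f_hat = kernel_gradient_descent(x, y, n_iter=20)           # early-stopped iterate
wald_stat = np.mean(f_hat ** 2)                            # Wald-type statistic
```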

* To appear in NIPS 2018 

How Many Machines Can We Use in Parallel Computing for Kernel Ridge Regression?

Sep 17, 2018
Meimei Liu, Zuofeng Shang, Guang Cheng

This paper attempts to solve a basic problem in distributed statistical inference: how many machines can we use in parallel computing? In kernel ridge regression, we address this question in two important settings: nonparametric estimation and hypothesis testing. Specifically, we find a range for the number of machines under which optimal estimation/testing is achievable. The employed empirical-processes method provides a unified framework that allows us to handle various regression problems (such as thin-plate splines and nonparametric additive regression) under different settings (such as univariate, multivariate, and diverging-dimensional designs). It is worth noting that the upper bounds on the number of machines are proven to be unimprovable (up to a logarithmic factor) in two important cases: smoothing spline regression and Gaussian RKHS regression. Our theoretical findings are backed by thorough numerical studies.
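
The divide-and-conquer scheme this question is usually asked about can be sketched in a few lines of NumPy, under illustrative assumptions: split the data uniformly at random across machines, fit kernel ridge regression locally, and average the local predictions. The kernel, bandwidth, and regularization level below are arbitrary; the paper's results concern how large the number of machines may grow while such an averaged estimator stays optimal.

```python
import numpy as np

def krr_fit_predict(x_tr, y_tr, x_te, lam=1e-2, h=0.3):
    """Kernel ridge regression with an RBF kernel, fit on one machine."""
    K = np.exp(-(x_tr[:, None] - x_tr[None, :]) ** 2 / (2 * h ** 2))
    alpha = np.linalg.solve(K + lam * len(x_tr) * np.eye(len(x_tr)), y_tr)
    K_te = np.exp(-(x_te[:, None] - x_tr[None, :]) ** 2 / (2 * h ** 2))
    return K_te @ alpha

def distributed_krr(x, y, x_te, n_machines, **kw):
    """Randomly partition the data, fit local KRR, average the predictions."""
    rng = np.random.default_rng(0)
    perm = rng.permutation(len(x))
    preds = [krr_fit_predict(x[idx], y[idx], x_te, **kw)
             for idx in np.array_split(perm, n_machines)]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=2000)
x_te = np.linspace(0, 1, 50)
y_hat = distributed_krr(x, y, x_te, n_machines=8)
```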

* This work extends the work in arXiv:1512.09226 to random and multivariate design 

Statistically and Computationally Efficient Variance Estimator for Kernel Ridge Regression

Sep 17, 2018
Meimei Liu, Jean Honorio, Guang Cheng

In this paper, we propose a random projection approach to estimate variance in kernel ridge regression. Our approach leads to a consistent estimator of the true variance, while being computationally more efficient. Our variance estimator is optimal for a large family of kernels, including cubic splines and Gaussian kernels. Simulation analysis is conducted to support our theory.
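
One plausible instantiation of the idea, sketched below under stated assumptions rather than as the paper's exact estimator: fit kernel ridge regression with coefficients restricted to a Gaussian random projection of the coefficient space, and plug the resulting hat matrix into the classical residual-based variance estimator with effective degrees of freedom.

```python
import numpy as np

def rbf_kernel(x, y, h=0.3):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * h ** 2))

def sketched_krr_variance(x, y, m=50, lam=1e-3, h=0.3, seed=0):
    """Residual-based noise-variance estimate from a randomly projected KRR fit."""
    rng = np.random.default_rng(seed)
    n = len(x)
    K = rbf_kernel(x, x, h)
    S = rng.standard_normal((n, m)) / np.sqrt(m)   # Gaussian random projection
    KS = K @ S
    M = KS.T @ KS + n * lam * (S.T @ KS)           # projected KRR normal equations
    hat = KS @ np.linalg.solve(M, KS.T)            # hat (smoothing) matrix
    y_hat = hat @ y
    dof = np.trace(hat)                            # effective degrees of freedom
    return np.sum((y - y_hat) ** 2) / (n - dof)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.5, size=500)
print(sketched_krr_variance(x, y))   # noise variance in this simulation is 0.25
```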

* To Appear in 2018 Allerton 

Nonparametric Testing under Random Projection

Feb 17, 2018
Meimei Liu, Zuofeng Shang, Guang Cheng

A common challenge in nonparametric inference is its high computational complexity when data volume is large. In this paper, we develop computationally efficient nonparametric testing by employing a random projection strategy. In the specific kernel ridge regression setup, a simple distance-based test statistic is proposed. Notably, we derive the minimum number of random projections that is sufficient for achieving testing optimality in terms of the minimax rate. An adaptive testing procedure is further established without prior knowledge of regularity. One technical contribution is to establish upper bounds for a range of tail sums of empirical kernel eigenvalues. Simulations and real data analysis are conducted to support our theory.
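
A toy version of the testing procedure, under illustrative assumptions: compute a kernel ridge regression fit whose coefficients live in a random low-dimensional projection, take the squared empirical norm of the fit as the distance-based statistic, and calibrate it by Monte Carlo simulation under the null with the noise level treated as known. The paper instead characterizes the minimal projection dimension and the statistic's null behavior theoretically.

```python
import numpy as np

def rbf_kernel(x, y, h=0.3):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * h ** 2))

def projected_krr_fit(x, y, m=20, lam=1e-3, h=0.3, rng=None):
    """KRR fitted values with coefficients restricted to an m-dimensional
    Gaussian random projection of the coefficient space."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    K = rbf_kernel(x, x, h)
    S = rng.standard_normal((n, m)) / np.sqrt(m)
    KS = K @ S
    beta = np.linalg.solve(KS.T @ KS + n * lam * (S.T @ KS), KS.T @ y)
    return KS @ beta

rng = np.random.default_rng(0)
n, sigma = 300, 0.5
x = rng.uniform(0, 1, n)
y = 0.2 * np.sin(4 * np.pi * x) + rng.normal(scale=sigma, size=n)

# Distance-based statistic: squared empirical norm of the projected fit.
T_obs = np.mean(projected_krr_fit(x, y, rng=rng) ** 2)

# Monte Carlo calibration under H0: f = 0 (pure-noise responses).
T_null = [np.mean(projected_krr_fit(x, rng.normal(scale=sigma, size=n), rng=rng) ** 2)
          for _ in range(200)]
p_value = np.mean(np.array(T_null) >= T_obs)
```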

Sparse and Low-rank Tensor Estimation via Cubic Sketchings

Jan 29, 2018
Botao Hao, Anru Zhang, Guang Cheng

In this paper, we propose a general framework for sparse and low-rank tensor estimation from cubic sketchings. A two-stage non-convex implementation is developed based on sparse tensor decomposition and thresholded gradient descent, which ensures exact recovery in the noiseless case and stable recovery in the noisy case with high probability. The non-asymptotic analysis sheds light on an interplay between optimization error and statistical error. The proposed procedure is shown to be rate-optimal under certain conditions. As a technical by-product, novel high-order concentration inequalities are derived for studying high-moment sub-Gaussian tensors. An interesting tensor formulation illustrates the potential application to high-order interaction pursuit in high-dimensional linear regression.
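
A stripped-down special case may help fix ideas: a rank-one, symmetric, sparse tensor beta ⊗ beta ⊗ beta observed through cubic sketchings y_i = <a_i, beta>^3 + noise, recovered by thresholded gradient descent after a simple moment-based initialization. Dimensions, step size, and the initialization below are illustrative choices; the general sparse low-rank setting treated in the paper is considerably more involved.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

rng = np.random.default_rng(0)
p, n, s = 100, 2000, 5
beta_star = np.zeros(p)
beta_star[:s] = 1.0 / np.sqrt(s)                          # sparse unit-norm signal
A = rng.standard_normal((n, p))
y = (A @ beta_star) ** 3 + 0.1 * rng.standard_normal(n)   # cubic sketchings

# Moment-based initialization: E[a <a, beta>^3] is proportional to beta.
beta = hard_threshold(A.T @ y / n, s)
beta /= np.linalg.norm(beta)

# Thresholded gradient descent on the least-squares loss.
lr = 0.01
for _ in range(300):
    inner = A @ beta
    grad = -3.0 / n * A.T @ ((y - inner ** 3) * inner ** 2)
    beta = hard_threshold(beta - lr * grad, s)

print("estimation error:", np.linalg.norm(beta - beta_star))
```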

Simultaneous Clustering and Estimation of Heterogeneous Graphical Models

Jan 12, 2018
Botao Hao, Will Wei Sun, Yufeng Liu, Guang Cheng

We consider joint estimation of multiple graphical models arising from heterogeneous and high-dimensional observations. Unlike most previous approaches, which assume that the cluster structure is given in advance, an appealing feature of our method is that it learns the cluster structure while estimating the heterogeneous graphical models. This is achieved via a high-dimensional version of the Expectation Conditional Maximization (ECM) algorithm (Meng and Rubin, 1993). A joint graphical lasso penalty is imposed on the conditional maximization step to extract both homogeneity and heterogeneity components across all clusters. Our algorithm is computationally efficient due to fast sparse learning routines and can be implemented without unsupervised learning knowledge. The superior performance of our method is demonstrated by extensive experiments, and its application to a Glioblastoma cancer dataset reveals new insights into understanding Glioblastoma. In theory, a non-asymptotic error bound is established for the output directly from our high-dimensional ECM algorithm, and it consists of two quantities: statistical error (statistical accuracy) and optimization error (computational complexity). Such a result gives a theoretical guideline for terminating the ECM iterations.
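
The sketch below shows the shape of such an EM-style procedure using scikit-learn's graphical lasso, with two simplifications flagged up front: each cluster gets its own graphical lasso penalty rather than the joint penalty used in the paper, and no high-dimensional truncation step is included. All parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import graphical_lasso

def ecm_cluster_graphs(X, n_clusters, n_iter=25, alpha=0.1, seed=0):
    """EM-style clustering with sparse Gaussian graphical models per cluster:
    the E-step computes soft cluster assignments; the conditional M-steps update
    weights, means, and graphical-lasso covariance/precision estimates."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    weights = np.full(n_clusters, 1.0 / n_clusters)
    means = X[rng.choice(n, n_clusters, replace=False)].copy()
    covs = [np.cov(X.T) + 1e-3 * np.eye(p) for _ in range(n_clusters)]
    for _ in range(n_iter):
        # E-step: posterior responsibilities under the current components.
        dens = np.column_stack([
            weights[k] * multivariate_normal.pdf(X, means[k], covs[k])
            for k in range(n_clusters)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # Conditional M-steps: weighted means, then sparse precision matrices.
        precisions = []
        for k in range(n_clusters):
            w = resp[:, k]
            weights[k] = w.mean()
            means[k] = (w[:, None] * X).sum(axis=0) / w.sum()
            Xc = X - means[k]
            S = (w[:, None] * Xc).T @ Xc / w.sum() + 1e-6 * np.eye(p)
            covs[k], prec = graphical_lasso(S, alpha=alpha)
            precisions.append(prec)
    return resp, means, precisions

rng = np.random.default_rng(1)
X = np.vstack([rng.multivariate_normal(np.zeros(5), np.eye(5), 150),
               rng.multivariate_normal(2 * np.ones(5), 0.5 * np.eye(5), 150)])
resp, means, precisions = ecm_cluster_graphs(X, n_clusters=2)
```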

* 61 pages. Accepted by Journal of Machine Learning Research 

Stability Enhanced Large-Margin Classifier Selection

Jan 20, 2017
Will Wei Sun, Guang Cheng, Yufeng Liu

Stability is an important aspect of a classification procedure because unstable predictions can reduce users' trust in a classification system and also harm the reproducibility of scientific conclusions. The major goal of our work is to introduce a novel notion of classification instability, namely decision boundary instability (DBI), and to combine it with the generalization error (GE) as a criterion for selecting the most accurate and stable classifier. Specifically, we implement a two-stage algorithm: (i) initially select a subset of classifiers whose estimated GEs are not significantly different from the minimal estimated GE among all candidate classifiers; (ii) then choose the optimal classifier as the one achieving the minimal DBI among the subset selected in stage (i). This general selection principle applies to both linear and nonlinear classifiers. Large-margin classifiers are used as a prototypical example to illustrate the idea. Our selection method is shown to be consistent in the sense that the optimal classifier simultaneously achieves the minimal GE and the minimal DBI. Various simulations and real examples further demonstrate the advantage of our method over several alternative approaches.
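
A rough scikit-learn rendering of the two-stage selection, under assumptions the paper does not make: cross-validation error within one standard error of the best stands in for "not significantly different", and disagreement of predictions across bootstrap refits stands in for DBI. The candidate classifiers and all thresholds are illustrative.

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
candidates = {"logistic": LogisticRegression(max_iter=1000),
              "linear_svm": SVC(kernel="linear"),
              "rbf_svm": SVC(kernel="rbf")}

# Stage (i): keep classifiers whose CV error is within one standard error of the best.
cv_err = {name: 1 - cross_val_score(clf, X, y, cv=10) for name, clf in candidates.items()}
best = min(cv_err, key=lambda name: cv_err[name].mean())
cutoff = cv_err[best].mean() + cv_err[best].std() / np.sqrt(10)
shortlist = [name for name in candidates if cv_err[name].mean() <= cutoff]

# Stage (ii): among the shortlist, pick the classifier with the smallest
# instability, measured here as label disagreement across bootstrap refits.
def instability(clf, X, y, n_boot=20, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        preds.append(clone(clf).fit(X[idx], y[idx]).predict(X))
    preds = np.array(preds)
    return np.mean(preds.std(axis=0) > 0)   # fraction of points with unstable labels

winner = min(shortlist, key=lambda name: instability(candidates[name], X, y))
print("selected classifier:", winner)
```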

* 38 pages. To appear in Statistica Sinica 