Yuejie Chi

Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization

Jul 30, 2020
Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi

Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that helps encourage exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain severely limited even for the tabular setting. This paper develops $\textit{non-asymptotic}$ convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- or even quadratically once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-à-vis inexactness of policy evaluation, and is able to find an $\epsilon$-optimal policy for the original MDP when applied to a slightly perturbed MDP. Our convergence results outperform the ones established for unregularized NPG methods (arXiv:1908.00261), and shed light upon the role of entropy regularization in accelerating convergence.
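
For intuition, the entropy-regularized NPG update under softmax parameterization takes a multiplicative form that blends the current policy with the exponentiated soft Q-function. Below is a minimal sketch of one such update given exact policy evaluation; the exact exponents follow the standard form of this update and are an assumption to be checked against the paper.

```python
import numpy as np

def npg_entropy_step(log_pi, Q_soft, eta, tau, gamma):
    """One entropy-regularized NPG update under softmax parameterization.

    log_pi : (S, A) array of log pi_t(a|s)
    Q_soft : (S, A) array of soft Q-values of the regularized MDP under pi_t
    The multiplicative form below matches the soft policy iteration
    interpretation mentioned in the abstract.
    """
    coef = 1.0 - eta * tau / (1.0 - gamma)          # shrink the old log-policy
    new_logits = coef * log_pi + eta * Q_soft / (1.0 - gamma)
    # normalize per state to obtain a valid log-policy
    new_logits -= new_logits.max(axis=1, keepdims=True)
    log_z = np.log(np.exp(new_logits).sum(axis=1, keepdims=True))
    return new_logits - log_z
```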

* V2 adds new proofs and improved results 

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model

Jun 17, 2020
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$, assuming access to a generative model. Despite a number of prior works tackling this problem, a complete picture of the trade-offs between sample complexity and statistical accuracy is yet to be determined. In particular, prior results suffer from a sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds at least $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^2}$ (up to some log factor). The current paper overcomes this barrier by certifying the minimax optimality of model-based reinforcement learning as soon as the sample size exceeds the order of $\frac{|\mathcal{S}||\mathcal{A}|}{1-\gamma}$ (modulo some log factor). More specifically, a perturbed model-based planning algorithm provably finds an $\varepsilon$-optimal policy with an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3\varepsilon^2}\log\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\varepsilon}$ samples for any $\varepsilon \in (0, \frac{1}{1-\gamma}]$. Along the way, we derive improved (instance-dependent) guarantees for model-based policy evaluation. To the best of our knowledge, this work provides the first minimax-optimal guarantee in a generative model that accommodates the entire range of sample sizes (beyond which finding a meaningful policy is information-theoretically impossible).
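
A hedged sketch of the model-based pipeline described above: draw $n$ samples per state-action pair from the generative model, form the empirical transition kernel, perturb the rewards slightly, and plan in the empirical MDP. The sampler interface, the perturbation scale, and the choice of value iteration as the planner are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def perturbed_model_based_planning(sample, R, S, A, gamma, n,
                                   sigma=1e-3, iters=1000, seed=0):
    """Illustrative sketch: sample(s, a, n) is an assumed generative-model
    interface returning n i.i.d. next states; R is the (S, A) reward table."""
    rng = np.random.default_rng(seed)
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            nxt = sample(s, a, n)                    # n draws from P(.|s, a)
            idx, cnt = np.unique(nxt, return_counts=True)
            P_hat[s, a, idx] = cnt / n               # empirical transitions
    R_pert = R + sigma * rng.random((S, A))          # small reward perturbation
    V = np.zeros(S)
    for _ in range(iters):                           # plan by value iteration
        V = np.max(R_pert + gamma * P_hat @ V, axis=1)
    return np.argmax(R_pert + gamma * P_hat @ V, axis=1)
```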

Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Jun 04, 2020
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning -- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function -- is at most on the order of \begin{equation*} \frac{1}{\mu_{\mathsf{min}}(1-\gamma)^5\varepsilon^2}+ \frac{t_{\mathsf{mix}}}{\mu_{\mathsf{min}}(1-\gamma)} \end{equation*} up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{\mathsf{mix}}$ and $\mu_{\mathsf{min}}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost of waiting for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least $|\mathcal{S}||\mathcal{A}|$. Further, the scaling on the discount complexity can be improved by means of variance reduction.
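
For reference, classical asynchronous Q-learning with a constant learning rate, as analyzed above, amounts to the following loop over a single Markovian trajectory; the environment and behavior-policy interfaces are assumed for illustration.

```python
import numpy as np

def async_q_learning(env_step, behavior_policy, S, A, gamma, eta, T, s0=0):
    """Minimal sketch: env_step(s, a) -> (reward, next_state) and
    behavior_policy(s) -> action are assumed interfaces."""
    Q = np.zeros((S, A))
    s = s0
    for _ in range(T):
        a = behavior_policy(s)
        r, s_next = env_step(s, a)
        # update only the visited (s, a) entry -- hence "asynchronous"
        target = r + gamma * Q[s_next].max()
        Q[s, a] += eta * (target - Q[s, a])          # constant learning rate
        s = s_next
    return Q
```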

Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

May 18, 2020
Tian Tong, Cong Ma, Yuejie Chi

Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then seek to optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite nonconvexity, recent literature has shown that these simple heuristics in fact achieve linear convergence, when properly initialized, for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices. The goal of this paper is to set forth a new algorithmic approach dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as pre-conditioned or diagonally-scaled gradient descent, where the pre-conditioners are adaptive and iteration-varying with minimal computational overhead. For low-rank matrix sensing and robust principal component analysis, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties. At the core of our analysis is the introduction of a new distance function that takes into account the pre-conditioners when measuring the distance between the iterates and the ground truth.
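
A minimal sketch of ScaledGD on the simplest objective $f(X,Y)=\frac{1}{2}\|XY^\top - M\|_F^2$; the matrix sensing and robust PCA settings studied in the paper replace the residual with the corresponding gradient and use a spectral rather than random initialization. Note the $r \times r$ preconditioners are recomputed every iteration, which is the "adaptive, iteration-varying" scaling.

```python
import numpy as np

def scaled_gd(M, r, eta=0.5, iters=200, seed=0):
    """ScaledGD sketch for min 0.5 * ||X Y^T - M||_F^2 (full observation)."""
    rng = np.random.default_rng(seed)
    d1, d2 = M.shape
    X = rng.standard_normal((d1, r))     # illustrative random initialization
    Y = rng.standard_normal((d2, r))
    for _ in range(iters):
        R = X @ Y.T - M                                   # residual
        # gradient steps preconditioned by (Y^T Y)^{-1} and (X^T X)^{-1}
        X_new = X - eta * R @ Y @ np.linalg.inv(Y.T @ Y)
        Y = Y - eta * R.T @ X @ np.linalg.inv(X.T @ X)
        X = X_new
    return X, Y
```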

Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently

Nov 25, 2019
Laixi Shi, Yuejie Chi

Multi-channel sparse blind deconvolution, or convolutional sparse coding, refers to the problem of learning an unknown filter by observing its circulant convolutions with multiple input signals that are sparse. This problem finds numerous applications in signal processing, computer vision, and inverse problems. However, it is challenging to learn the filter efficiently due to the bilinear structure of the observations with respect to the unknown filter and inputs, leading to global ambiguities of identification. In this paper, we propose a novel approach based on nonconvex optimization over the sphere manifold by minimizing a smooth surrogate of the sparsity-promoting loss function. We demonstrate that manifold gradient descent with random initialization provably recovers the filter, up to an inherent scaling and shift ambiguity, as soon as the number of observations is sufficiently large under an appropriate random data model. Numerical experiments are provided to illustrate the performance of the proposed method with comparisons to existing methods.
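
The core iteration is Riemannian gradient descent on the unit sphere: project the Euclidean gradient of the smooth surrogate onto the tangent space at the current point, take a step, and retract by normalization. A generic sketch, with the surrogate's gradient left abstract:

```python
import numpy as np

def sphere_gradient_descent(grad_f, h0, eta=0.1, iters=500):
    """grad_f(h) returns the Euclidean gradient of the smooth
    sparsity surrogate; its specific form is an assumption here."""
    h = h0 / np.linalg.norm(h0)
    for _ in range(iters):
        g = grad_f(h)
        g_tan = g - (h @ g) * h            # project onto tangent space at h
        h = h - eta * g_tan                # gradient step
        h /= np.linalg.norm(h)             # retract back to the sphere
    return h
```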

* submitted 

Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees

Oct 09, 2019
Changxiao Cai, Gen Li, Yuejie Chi, H. Vincent Poor, Yuxin Chen

This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries. There is no shortage of scenarios where the observations --- while being too noisy to support faithful recovery of the entire matrix --- still convey sufficient information to enable reliable estimation of the column space of interest. This is particularly evident and crucial for the highly unbalanced case where the column dimension $d_{2}$ far exceeds the row dimension $d_{1}$, which is the focal point of the current paper. We investigate an efficient spectral method, which operates upon the sample Gram matrix with diagonal deletion. We establish statistical guarantees for this method in terms of both $\ell_{2}$ and $\ell_{2,\infty}$ estimation accuracy, which improve upon prior results if $d_{2}$ is substantially larger than $d_{1}$. To illustrate the effectiveness of our findings, we develop consequences of our general theory for three applications of practical importance: (1) tensor completion from noisy data, (2) covariance estimation with missing data, and (3) community recovery in bipartite graphs. Our theory leads to improved performance guarantees for all three cases.
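
The spectral method itself is short enough to state in code: form the sample Gram matrix of the $d_1 \times d_2$ data matrix, delete its diagonal to remove the noise-induced bias, and take the top-$r$ eigenvectors. A minimal sketch, omitting the rescaling for missing data:

```python
import numpy as np

def column_space_estimate(Y, r):
    """Y: d1 x d2 observed data matrix (zeros at unobserved entries)."""
    G = Y @ Y.T                             # d1 x d1 sample Gram matrix
    np.fill_diagonal(G, 0.0)                # diagonal deletion
    w, U = np.linalg.eigh(G)                # eigenvalues in ascending order
    return U[:, -r:]                        # top-r eigenspace estimate
```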

Communication-Efficient Distributed Optimization in Networks with Gradient Tracking

Sep 12, 2019
Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi

There is a growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning. Due to the pressing need to alleviate the communication burden, the investigation of communication-efficient distributed optimization algorithms --- particularly for empirical risk minimization --- has flourished in recent years. A large fraction of these algorithms have been developed for the master/slave setting, relying on the presence of a central parameter server that can communicate with all agents. This paper focuses on distributed optimization over the network-distributed or the decentralized setting, where each agent is only allowed to aggregate information from its neighbors over a network (namely, no centralized coordination is present). By properly adjusting the global gradient estimate via a tracking term, we develop a communication-efficient approximate Newton-type method, called Network-DANE, which generalizes DANE [Shamir et al., 2014] for decentralized networks. We establish linear convergence of Network-DANE for quadratic losses, which sheds light on the impact of data homogeneity and network connectivity upon the rate of convergence. Our key algorithmic ideas can be applied, in a systematic manner, to obtain decentralized versions of other master/slave distributed algorithms. A notable example is our development of Network-SVRG, which employs stochastic variance reduction [Johnson and Zhang, 2013] at each agent to accelerate local computation. The proposed algorithms are built upon the primal formulation without resorting to the dual. Numerical evidence is provided to demonstrate the appealing performance of our algorithms over competitive baselines, in terms of both communication and computation efficiency.
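
To illustrate the tracking term, here is a sketch of plain decentralized gradient descent with gradient tracking, where each agent maintains a running estimate of the global gradient; Network-DANE replaces the simple gradient step below with a local DANE-type subproblem at each agent. The interfaces are assumed for illustration.

```python
import numpy as np

def gradient_tracking(grads, W, x0, eta=0.1, iters=200):
    """grads: list of n callables, grads[i](x) -> local gradient of agent i.
    W: n x n doubly stochastic mixing matrix matching the network topology."""
    n = len(grads)
    x = np.tile(x0, (n, 1))                               # per-agent iterates
    g = np.stack([grads[i](x[i]) for i in range(n)])      # local gradients
    s = g.copy()                                          # tracking variables
    for _ in range(iters):
        x = W @ x - eta * s                               # mix, then descend
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        s = W @ s + g_new - g                             # track global gradient
        g = g_new
    return x.mean(axis=0)
```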

Vector-Valued Graph Trend Filtering with Non-Convex Penalties

May 29, 2019
Rohan Varma, Harlin Lee, Jelena Kovačević, Yuejie Chi

We study the denoising of piecewise smooth graph signals that exhibit inhomogeneous levels of smoothness over a graph, where the value at each node can be vector-valued. We extend the graph trend filtering framework to denoising vector-valued graph signals with a family of non-convex regularizers that exhibit superior recovery performance over existing convex regularizers. We establish the statistical error rates of first-order stationary points of the proposed non-convex method for generic graphs using oracle inequalities. We further present an ADMM-based algorithm to solve the proposed method and analyze its convergence. We present numerical experiments on both synthetic and real-world data for denoising, support recovery, and semi-supervised classification.
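
As a concrete reference point, below is an ADMM sketch for the scalar-valued graph trend filtering problem with the convex $\ell_1$ penalty, $\min_\beta \frac{1}{2}\|y-\beta\|_2^2 + \lambda\|\Delta\beta\|_1$, where $\Delta$ is a graph difference operator. The non-convex penalties in the paper replace the soft-thresholding $z$-update with the corresponding MCP/SCAD-type shrinkage, and vector-valued signals apply the same steps columnwise.

```python
import numpy as np

def gtf_admm(y, Delta, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||y - beta||^2 + lam*||Delta beta||_1 (convex sketch)."""
    m, n = Delta.shape
    beta = y.copy()
    z = np.zeros(m)
    u = np.zeros(m)                                  # scaled dual variable
    A = np.eye(n) + rho * Delta.T @ Delta            # cached beta-update system
    for _ in range(iters):
        beta = np.linalg.solve(A, y + rho * Delta.T @ (z - u))
        v = Delta @ beta + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + Delta @ beta - z
    return beta
```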

* The first two authors contributed equally. This paper has been submitted to the IEEE Transactions on Signal and Information Processing over Networks. A preliminary version of partial results in this paper was presented at 2019 IEEE ICASSP 