Alekh Agarwal

Oracle inequalities for computationally adaptive model selection

Aug 01, 2012
Alekh Agarwal, Peter L. Bartlett, John C. Duchi

We analyze general model selection procedures using penalized empirical loss minimization under computational constraints. While classical model selection approaches do not consider computational aspects of performing model selection, we argue that any practical model selection procedure must trade off not only estimation and approximation error, but also the computational effort required to compute empirical minimizers for different function classes. We provide a framework for analyzing such problems, and we give algorithms for model selection under a computational budget. These algorithms satisfy oracle inequalities that show that the risk of the selected model is not much worse than if we had devoted all of our computational budget to the optimal function class.
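
The paper's algorithms are not reproduced here, but the following hypothetical sketch illustrates the general shape of the problem: a fixed computational budget is split across candidate function classes, each class is only partially optimized under its share, and a winner is chosen by penalized empirical loss. The helpers `train_fn` and `penalty_fn` are assumptions for illustration, not objects from the paper.

```python
import numpy as np

def budgeted_model_selection(classes, train_fn, penalty_fn, budget):
    """Hypothetical sketch (not the paper's algorithm): split a total
    computational budget across candidate function classes, partially fit
    each class under its share, and select by penalized empirical loss.

    classes    : list of model-class identifiers
    train_fn   : (cls, budget_share) -> (model, empirical_loss)    [assumed helper]
    penalty_fn : (cls, budget_share) -> complexity penalty (float) [assumed helper]
    """
    share = budget // len(classes)
    best_model, best_score = None, np.inf
    for cls in classes:
        model, emp_loss = train_fn(cls, share)      # partial optimization under the budget share
        score = emp_loss + penalty_fn(cls, share)   # penalized empirical loss
        if score < best_score:
            best_model, best_score = model, score
    return best_model
```

The even split above is only the crudest allocation; the oracle inequalities in the abstract concern procedures whose selected model compares favorably with devoting the entire budget to the single best class.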

Ergodic Mirror Descent

Aug 01, 2012
John C. Duchi, Alekh Agarwal, Mikael Johansson, Michael I. Jordan

We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, but instead receive samples that are coupled over time. We show that as long as the source of randomness is suitably ergodic---it converges quickly enough to a stationary distribution---the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
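
As an illustration of the setting, the sketch below runs mirror descent with an entropic mirror map over the probability simplex while the loss samples are generated by a two-state Markov chain rather than i.i.d. draws. The chain, the per-state losses, and the step sizes are toy choices, not the paper's assumptions or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: minimize the stationary expected loss over the probability simplex,
# where the loss index Z_t comes from an ergodic two-state Markov chain (not i.i.d.).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # transition matrix; mixes to its stationary law
losses = [np.array([1.0, 0.2, 0.5]),     # per-state linear losses f(x; z) = <c_z, x>
          np.array([0.1, 0.9, 0.4])]

d, T = 3, 5000
x = np.full(d, 1.0 / d)                  # start at the uniform distribution
avg = np.zeros(d)
state = 0
for t in range(1, T + 1):
    g = losses[state]                    # subgradient of the loss observed at this sample
    eta = 1.0 / np.sqrt(t)               # decaying step size
    x = x * np.exp(-eta * g)             # entropic mirror step (exponentiated gradient)
    x /= x.sum()                         # Bregman projection back onto the simplex
    avg += (x - avg) / t                 # running average of the iterates
    state = rng.choice(2, p=A[state])    # next sample drawn from the Markov chain

print("averaged iterate:", np.round(avg, 3))
```

Because this chain mixes geometrically, the time-averaged iterate tends toward the minimizer of the loss averaged under the chain's stationary distribution, which is the kind of guarantee the abstract describes.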

* 35 pages, 2 figures 

Fast global convergence of gradient methods for high-dimensional statistical recovery

Jul 25, 2012
Alekh Agarwal, Sahand N. Negahban, Martin J. Wainwright

Many statistical $M$-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension $d$ to grow with (and possibly exceed) the sample size $n$. This high-dimensional structure precludes the usual global assumptions---namely, strong convexity and smoothness conditions---that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the \emph{statistical precision} of the model, meaning the typical distance between the true unknown parameter $\theta^*$ and an optimal solution $\hat{\theta}$. This result is substantially sharper than previous convergence results, which yielded sublinear convergence, or linear convergence only up to the noise level. Our analysis applies to a wide range of $M$-estimators and statistical models, including sparse linear regression using Lasso ($\ell_1$-regularized regression); group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition. Overall, our analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.
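
As a concrete instance of the composite gradient method analyzed in the abstract, here is a minimal proximal gradient (ISTA-style) sketch for the Lasso. The step size, regularization level, and toy data are illustrative choices, not the constants prescribed by the theory.

```python
import numpy as np

def lasso_composite_gradient(X, y, lam, step, iters=500):
    """Minimal composite (proximal) gradient sketch for the Lasso:
        min_theta (1/2n)||y - X theta||^2 + lam * ||theta||_1
    Each iteration takes a gradient step on the smooth loss and then
    soft-thresholds (the prox operator of the l1 penalty)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n                               # gradient of the smooth loss
        z = theta - step * grad                                        # gradient step
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding
    return theta

# Toy usage: a sparse ground truth in a regime where d exceeds n.
rng = np.random.default_rng(1)
n, d, s = 100, 200, 5
theta_star = np.zeros(d); theta_star[:s] = 1.0
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
step = n / np.linalg.norm(X, 2) ** 2          # 1 / (smoothness constant of the loss)
theta_hat = lasso_composite_gradient(X, y, lam=0.1, step=step)
```

Under the restricted strong convexity and smoothness conditions of the abstract, iterations of this type contract geometrically until they reach the statistical precision of $\hat{\theta}$.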

Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions

Jul 18, 2012
Alekh Agarwal, Sahand Negahban, Martin J. Wainwright

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(d/T)$ convergence rate for strongly convex objectives in $d$ dimensions, and an $\mathcal{O}(\sqrt{(s \log d)/T})$ convergence rate when the optimum is $s$-sparse. Our algorithm is based on successively solving a series of $\ell_1$-regularized optimization problems using Nesterov's dual averaging algorithm. We establish that the error of our solution after $T$ iterations is at most $\mathcal{O}((s \log d)/T)$, with natural extensions to approximate sparsity. Our results apply to locally Lipschitz losses including the logistic, exponential, hinge and least-squares losses. By recourse to statistical minimax results, we show that our convergence rates are optimal up to multiplicative constant factors. The effectiveness of our approach is also confirmed in numerical simulations, in which we compare to several baselines on a least-squares regression problem.
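
The sketch below shows the $\ell_1$-regularized dual averaging update that serves as the kind of inner solver the abstract refers to; the paper's full algorithm wraps such a solver in a multi-stage (epoch) scheme with changing regularization, which is not reproduced here. The toy gradient oracle and the choices of `lam` and `gamma` are illustrative assumptions.

```python
import numpy as np

def l1_rda(grad_fn, d, T, lam=0.1, gamma=1.0, rng=None):
    """Sketch of l1-regularized dual averaging (an inner solver only; the
    paper's algorithm runs a sequence of such stages).  grad_fn(x, rng) must
    return a stochastic subgradient of the loss at x; gamma should scale with
    the typical gradient magnitude for stability."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(d)
    gbar = np.zeros(d)
    for t in range(1, T + 1):
        g = grad_fn(x, rng)
        gbar += (g - gbar) / t                        # running average of the gradients
        shrunk = np.maximum(np.abs(gbar) - lam, 0.0)  # soft-threshold the averaged gradient
        x = -(np.sqrt(t) / gamma) * np.sign(gbar) * shrunk   # closed-form dual-averaging step
    return x

# Toy usage: streaming least squares with a 5-sparse target in 50 dimensions.
d, s = 50, 5
theta_star = np.zeros(d); theta_star[:s] = 1.0

def sq_loss_grad(x, rng):
    a = rng.standard_normal(d)
    y = a @ theta_star + 0.1 * rng.standard_normal()
    return (a @ x - y) * a                            # gradient of 0.5*(a^T x - y)^2

x_hat = l1_rda(sq_loss_grad, d, T=20000, lam=0.05, gamma=2 * np.sqrt(d))
```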

* 2 figures 

The Generalization Ability of Online Algorithms for Dependent Data

Jun 07, 2012
Alekh Agarwal, John C. Duchi

We study the generalization performance of online learning algorithms trained on samples coming from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret, an easily computable statistic of the algorithm's online performance, when the underlying ergodic process is $\beta$- or $\phi$-mixing. We show high-probability error bounds assuming the loss function is convex, and we establish sharp convergence rates and deviation bounds for strongly convex losses and several linear prediction problems such as linear and logistic regression, least-squares SVM, and boosting on dependent data. In addition, our results have straightforward applications to stochastic optimization with dependent data, and our analysis requires only martingale convergence arguments; we need not rely on more powerful statistical tools such as empirical process theory.
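
To make the objects in the statement concrete, the toy sketch below runs online gradient descent on a stream whose features follow an AR(1) process (consecutive samples are dependent but mixing) and records the two ingredients the result connects: the per-round losses and the regret against the best fixed predictor in hindsight. The process, step sizes, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: online gradient descent on a dependent (AR(1)) feature stream.
T, d = 2000, 3
theta_star = np.array([1.0, -0.5, 0.25])
theta = np.zeros(d)
x = np.zeros(d)                                        # AR(1) features: dependent across rounds
Xs, ys, online_losses = [], [], []
for t in range(1, T + 1):
    x = 0.8 * x + 0.2 * rng.standard_normal(d)         # geometrically mixing feature process
    y = x @ theta_star + 0.05 * rng.standard_normal()
    err = x @ theta - y
    online_losses.append(0.5 * err ** 2)               # squared loss suffered this round
    theta = theta - (0.5 / np.sqrt(t)) * err * x       # online gradient step
    Xs.append(x.copy()); ys.append(y)

X, Y = np.array(Xs), np.array(ys)
theta_hindsight = np.linalg.lstsq(X, Y, rcond=None)[0]      # best fixed predictor in hindsight
regret = sum(online_losses) - 0.5 * np.sum((X @ theta_hindsight - Y) ** 2)
print(f"average per-round regret: {regret / T:.5f}")
```

The abstract's claim is that, for mixing processes like this one, the easily computed average regret also controls the generalization error of the online iterates.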

* 26 pages, 1 figure 

Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions

Mar 06, 2012
Alekh Agarwal, Sahand N. Negahban, Martin J. Wainwright

We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation $\mathfrak{X}$ of the sum of an (approximately) low rank matrix $\Theta^\star$ with a second matrix $\Gamma^\star$ endowed with a complementary form of low-dimensional structure; this set-up covers many statistical models of interest, including factor analysis, multi-task regression, and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair $(\Theta^\star, \Gamma^\star)$ obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results utilize a "spikiness" condition that is related to, but milder than, singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields non-asymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices $\Theta^\star$ that can be exactly or approximately low rank, and matrices $\Gamma^\star$ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our predictions is confirmed by numerical simulations.
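
For the low-rank-plus-entrywise-sparse case with the identity observation operator, the convex program in question can be solved by a simple proximal gradient scheme, sketched below. The regularization weights, step size, and iteration count are illustrative, and the paper's additional "spikiness" constraint is omitted for simplicity.

```python
import numpy as np

def decompose(Y, lam_nuc=1.0, lam_ell1=0.05, step=0.5, iters=300):
    """Sketch of the low-rank + entrywise-sparse instance (identity operator):
        min_{Theta, Gamma} 0.5*||Y - Theta - Gamma||_F^2
                           + lam_nuc*||Theta||_* + lam_ell1*||Gamma||_1
    solved by proximal gradient; weights are illustrative, and the paper's
    spikiness (l_infinity) constraint on Theta is not enforced here."""
    Theta = np.zeros_like(Y)
    Gamma = np.zeros_like(Y)
    for _ in range(iters):
        R = Theta + Gamma - Y                      # gradient of the smooth term for both blocks
        # Nuclear-norm prox: singular value soft-thresholding.
        U, sv, Vt = np.linalg.svd(Theta - step * R, full_matrices=False)
        Theta = U @ np.diag(np.maximum(sv - step * lam_nuc, 0.0)) @ Vt
        # l1 prox: entrywise soft-thresholding.
        Z = Gamma - step * R
        Gamma = np.sign(Z) * np.maximum(np.abs(Z) - step * lam_ell1, 0.0)
    return Theta, Gamma

# Toy usage: rank-1 matrix plus sparse corruption, observed with small noise.
rng = np.random.default_rng(4)
u = rng.standard_normal((30, 1)); v = rng.standard_normal((1, 30))
S = (rng.random((30, 30)) < 0.05) * rng.standard_normal((30, 30))
Y = u @ v + S + 0.01 * rng.standard_normal((30, 30))
Theta_hat, Gamma_hat = decompose(Y)
```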

* Annals of Statistics 2012, Vol. 40, No. 2, 1171-1197  
* 41 pages, 2 figures 

Contextual Bandit Learning with Predictable Rewards

Mar 02, 2012
Alekh Agarwal, Miroslav Dudík, Satyen Kale, John Langford, Robert E. Schapire

Contextual bandit learning is a reinforcement learning problem where the learner repeatedly receives a set of features (context), takes an action, and receives a reward based on the action and context. We consider this problem under a realizability assumption: there exists a function in a (known) function class that always predicts the expected reward given the action and context. Under this assumption, we show three things. We present a new algorithm---Regressor Elimination---with a regret similar to the agnostic setting (i.e., in the absence of the realizability assumption). We prove a new lower bound showing that no algorithm can achieve superior performance in the worst case even with the realizability assumption. However, we do show that for any set of policies (mapping contexts to actions), there is a distribution over rewards (given context) such that our new algorithm has constant regret, unlike the previous approaches.
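
The sketch below shows only the elimination idea in a heavily simplified form: explore, score every surviving regressor by its squared prediction error on the observed (context, action, reward) triples, and drop regressors that fall behind the best by a margin. The action-selection rule and the elimination margin here are crude placeholders; the paper's Regressor Elimination algorithm chooses its action distribution far more carefully in order to obtain the stated regret guarantee.

```python
import numpy as np

def regressor_elimination_sketch(regressors, contexts, K, T, reward_fn,
                                 elim_margin=0.05, rng=None):
    """Heavily simplified sketch of the elimination idea (not the paper's
    algorithm).  Arguments and the margin schedule are hypothetical.
      regressors : list of callables f(context, action) -> predicted reward
      reward_fn  : (context, action) -> observed reward (the environment)"""
    rng = rng or np.random.default_rng(0)
    active = list(range(len(regressors)))
    sq_err = np.zeros(len(regressors))
    for t in range(T):
        ctx = contexts[rng.integers(len(contexts))]
        # Act greedily w.r.t. a randomly chosen surviving regressor (crude exploration).
        f = regressors[rng.choice(active)]
        a = int(np.argmax([f(ctx, act) for act in range(K)]))
        r = reward_fn(ctx, a)
        for i in active:                             # score survivors on the observed triple
            sq_err[i] += (regressors[i](ctx, a) - r) ** 2
        best = min(sq_err[i] for i in active)
        active = [i for i in active if sq_err[i] <= best + elim_margin * (t + 1)]
    return active
```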

Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization

Nov 20, 2011
Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright

Relative to the large literature on upper bounds on the complexity of convex optimization, less attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

Stochastic convex optimization with bandit feedback

Oct 08, 2011
Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, Alexander Rakhlin

This paper addresses the problem of minimizing a convex, Lipschitz function $f$ over a convex, compact set $\mathcal{X}$ under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value $f(x)$ at any query point $x \in \mathcal{X}$. The quantity of interest is the regret of the algorithm, which is the sum of the function values at the algorithm's query points minus the optimal function value. We demonstrate a generalization of the ellipsoid algorithm that incurs $\tilde{O}(\mathrm{poly}(d)\sqrt{T})$ regret. Since any algorithm has regret at least $\Omega(\sqrt{T})$ on this problem, our algorithm is optimal in terms of the scaling with $T$.
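
The full ellipsoid-style algorithm is involved, but a one-dimensional caricature of the idea can be sketched as a noisy trisection search: evaluate the function repeatedly at interior points to average out the noise, then discard part of the interval. The epoch and repetition counts below are illustrative, not the schedule that yields the stated regret bound.

```python
import numpy as np

def noisy_trisection(f_noisy, lo, hi, epochs=20, reps=2000, rng=None):
    """Sketch of a one-dimensional noisy trisection search: average many
    noisy evaluations at two interior points, then shrink the interval
    toward the smaller estimate.  The paper's algorithm handles d dimensions
    with an ellipsoid-style method; this schedule is purely illustrative."""
    rng = rng or np.random.default_rng(0)
    for _ in range(epochs):
        x1 = lo + (hi - lo) / 3.0
        x2 = hi - (hi - lo) / 3.0
        f1 = np.mean([f_noisy(x1, rng) for _ in range(reps)])  # average out the noise
        f2 = np.mean([f_noisy(x2, rng) for _ in range(reps)])
        if f1 < f2:
            hi = x2      # with exact values of a convex f, the minimizer lies in [lo, x2]
        else:
            lo = x1
    return 0.5 * (lo + hi)

# Toy usage: f(x) = (x - 0.3)^2 observed with Gaussian noise.
x_hat = noisy_trisection(lambda x, rng: (x - 0.3) ** 2 + 0.1 * rng.standard_normal(),
                         lo=-1.0, hi=1.0)
```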

Online and Batch Learning Algorithms for Data with Missing Features

Jun 16, 2011
Afshin Rostamizadeh, Alekh Agarwal, Peter Bartlett

We introduce new online and batch algorithms that are robust to data with missing features, a situation that arises in many practical applications. In the online setup, we allow the comparison hypothesis to change as a function of the subset of features that is observed on any given round, extending the standard setting where the comparison hypothesis is fixed throughout. In the batch setup, we present a convex relaxation of a non-convex problem to jointly estimate an imputation function, used to fill in the values of missing features, along with the classification hypothesis. We prove regret bounds in the online setting and Rademacher complexity bounds for the batch i.i.d. setting. The algorithms are tested on several UCI datasets, showing superior performance over baselines.
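
A minimal sketch of the online setup follows: each round only a random subset of features arrives, and the hinge-loss gradient step touches only the observed coordinates, so the learner is implicitly compared against hypotheses defined on the observed subset. The observation probability, step size, and data model are toy assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: online classification where each round reveals a random subset
# of the features; the update is restricted to the observed coordinates.
T, d = 1000, 10
w_star = rng.standard_normal(d)
w = np.zeros(d)
for t in range(1, T + 1):
    mask = rng.random(d) < 0.7                    # which features are observed this round
    x = rng.standard_normal(d) * mask             # unobserved features treated as zero
    y = np.sign(x @ w_star + 0.1 * rng.standard_normal())
    margin = y * (x @ w)
    if margin < 1.0:                              # hinge-loss subgradient step
        w[mask] += (1.0 / np.sqrt(t)) * y * x[mask]
```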

* 27th Conference on Uncertainty in Artificial Intelligence (UAI 2011)  