Abstract: Construction of confidence intervals and hypothesis tests for functionals based on asymptotically normal estimators is a classical topic in statistical inference. The simplest, and in many cases optimal, inference procedures are the Wald interval and the likelihood ratio test, both of which require an estimator and an estimate of the asymptotic variance of the estimator. Estimators obtained from online/sequential algorithms force one to consider the computational aspects of the inference problem, i.e., one cannot access all of the data as many times as needed. Several works on this topic have explored the online estimation of the asymptotic variance. In this article, we propose computationally efficient, rate-optimal, and asymptotically valid confidence regions based on the output of online algorithms {\em without} estimating the asymptotic variance. As a special case, this implies inference from any algorithm that yields an asymptotically normal estimator. We focus our efforts on stochastic gradient descent with Polyak averaging to understand the practical performance of the proposed method.
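For context, a brief sketch of the averaged SGD setup the abstract refers to (the notation here is generic and not taken from the paper): the iterates and their Polyak average are
\[
\theta_{t+1} = \theta_t - \gamma_t \nabla \ell(\theta_t; \xi_{t+1}), \qquad \bar\theta_n = \frac{1}{n}\sum_{t=1}^{n} \theta_t,
\]
and under standard conditions $\sqrt{n}\,(\bar\theta_n - \theta^*)$ converges in distribution to $\mathcal{N}(0, \Sigma)$. A Wald interval would require an estimate of $\Sigma$, which is precisely what the proposed confidence regions avoid.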
Abstract: We consider first order expansions of convex penalized estimators in high-dimensional regression problems with random designs. Our setting includes linear regression and logistic regression as special cases. For a given penalty function $h$ and the corresponding penalized estimator $\hat\beta$, we construct a quantity $\eta$, the first order expansion of $\hat\beta$, such that the distance between $\hat\beta$ and $\eta$ is an order of magnitude smaller than the estimation error $\|\hat{\beta} - \beta^*\|$. In this sense, the first order expansion $\eta$ can be thought of as a generalization of influence functions from the mathematical statistics literature to regularized estimators in high dimensions. Such a first order expansion implies that the risk of $\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a precise characterization of the MSE of $\hat\beta$; this characterization takes a particularly simple form for isotropic design. The first order expansion also leads to inference results based on $\hat{\beta}$. We provide sufficient conditions for the existence of such a first order expansion for three regularizers: the Lasso in its constrained form, the Lasso in its penalized form, and the Group-Lasso. The results apply to general loss functions under conditions that are satisfied for the squared loss in linear regression and for the logistic loss in the logistic model.
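As an illustrative restatement of the property described above (this display uses the abstract's notation but is not taken verbatim from the paper), the penalized estimator and its first order expansion satisfy
\[
\hat\beta \in \operatorname*{arg\,min}_{b} \; \frac{1}{n}\sum_{i=1}^{n} \ell(x_i^\top b, y_i) + h(b), \qquad \|\hat\beta - \eta\| = o_P\bigl(\|\hat\beta - \beta^*\|\bigr),
\]
so that $\eta$ captures the leading-order behavior of $\hat\beta$ in the same way an influence-function expansion does for classical, unregularized estimators.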