Abstract: The low-rank adaptation (LoRA) algorithm for fine-tuning large models has grown popular in recent years due to its remarkable performance and low computational requirements. LoRA trains two ``adapter'' matrices that form a low-rank representation of the model parameters, thereby massively reducing the number of parameters that need to be updated at every step. Although LoRA is simple, its convergence is poorly understood because the objective, viewed as a function of the adapter matrices, lacks Lipschitz smoothness, a key condition for classic convergence analyses. As a result, current theoretical results only consider asymptotic behavior or assume strong boundedness conditions that artificially enforce Lipschitz smoothness. In this work, we provide for the first time a non-asymptotic convergence analysis of the \textit{original LoRA gradient descent} algorithm, which reflects widespread practice, without such assumptions. Our work relies on three key steps: i) reformulating the problem in terms of the outer product of the stacked adapter matrices, ii) establishing a modified descent lemma for the ``Lipschitz-like'' reparametrized function, and iii) controlling the step size. With this approach, we prove that LoRA gradient descent converges to a stationary point at a rate of $O(\frac{1}{\log T})$, where $T$ is the number of iterations.
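For concreteness, with frozen weights $W_0 \in \mathbb{R}^{d \times k}$ and adapter matrices $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, LoRA minimizes $f(W_0 + BA)$ over $(A, B)$, and plain gradient descent with step size $\eta$ reads
\[
B_{t+1} = B_t - \eta\, \nabla f(W_0 + B_t A_t)\, A_t^{\top}, \qquad
A_{t+1} = A_t - \eta\, B_t^{\top}\, \nabla f(W_0 + B_t A_t).
\]
The reformulation in step i) can be pictured through the stacked variable $Z_t = \begin{bmatrix} B_t \\ A_t^{\top} \end{bmatrix}$, whose outer product $Z_t Z_t^{\top}$ contains $B_t A_t$ as an off-diagonal block; this is a schematic illustration of the reparametrization, not the precise object used in the analysis.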
Abstract: Machine unlearning algorithms aim to efficiently remove data from a model without retraining it from scratch, in order to enforce data privacy, remove corrupted or outdated data, or respect a user's ``right to be forgotten.'' Certified machine unlearning is a strong theoretical guarantee that quantifies the extent to which data is erased from the model weights. Most prior works on certified unlearning focus on models trained on convex or strongly convex loss functions, which benefit from convenient convergence guarantees and the existence of global minima. For nonconvex objectives, existing algorithms rely on limiting assumptions and expensive computations that hinder practical implementation. In this work, we propose a simple first-order algorithm for unlearning on general nonconvex loss functions, which unlearns by ``rewinding'' to an earlier step of the learning process and then performing gradient descent on the loss function of the retained data points. Our algorithm is black-box, in that it can be applied directly to models pretrained with vanilla gradient descent, with no prior consideration of unlearning. We prove $(\epsilon, \delta)$-certified unlearning and performance guarantees that establish the privacy-utility-complexity tradeoff of our algorithm, with special consideration for nonconvex functions that satisfy the Polyak-{\L}ojasiewicz inequality.
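To make the procedure concrete, a minimal sketch follows, assuming parameters are stored as arrays and that a helper \texttt{grad\_retained} computes the full-batch gradient of the retained data's loss; the function and argument names are illustrative, and the rewind point, step size, and any output perturbation are as prescribed by the analysis rather than by this sketch.
\begin{verbatim}
def rewind_and_finetune(checkpoints, rewind_step, grad_retained,
                        step_size, num_steps):
    """Sketch of rewind-then-fine-tune unlearning.

    checkpoints:   parameter vectors saved during the original training
                   run (e.g., NumPy arrays)
    grad_retained: callable returning the full-batch gradient of the
                   loss over the retained data points
    """
    # Rewind: discard later iterates and restart from an earlier step.
    theta = checkpoints[rewind_step].copy()
    # Fine-tune: plain gradient descent on the retained data's loss only.
    for _ in range(num_steps):
        theta = theta - step_size * grad_retained(theta)
    # (A calibrated perturbation of the output is what typically yields
    #  the (eps, delta) certificate; the exact mechanism follows the paper.)
    return theta
\end{verbatim}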
Abstract: Although the convergence of policy gradient algorithms to first-order stationary points is well established, the objective functions of reinforcement learning problems are typically highly nonconvex. Therefore, recent work has focused on two extensions: ``global'' convergence guarantees under regularity assumptions on the function structure, and second-order guarantees for escaping saddle points and converging to true local minima. Our work expands on the latter approach, avoiding the restrictive assumptions of the former, which may not apply to general objective functions. Existing results on vanilla policy gradient consider only an unbiased gradient estimator, but practical implementations in the infinite-horizon discounted setting, including both Monte-Carlo and actor-critic methods, involve gradient descent updates with a biased gradient estimator. We present preliminary results on the convergence of biased policy gradient algorithms to second-order stationary points, leveraging proof techniques from nonconvex optimization. In our next steps, we aim to provide the first finite-time second-order convergence analysis for actor-critic algorithms.
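To make the source of bias concrete: under the infinite-horizon discounted objective, the policy gradient theorem gives the exact gradient on the left below, while practical estimators truncate at a finite horizon $H$ and substitute an estimate $\widehat{Q}_t$ of the action value (a Monte-Carlo return or a learned critic), yielding the biased estimator on the right; here $H$ and $\widehat{Q}_t$ are generic placeholders rather than fixed notation.
\[
\nabla J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q^{\pi_\theta}(s_t, a_t) \right],
\qquad
\widehat{\nabla} J(\theta) = \sum_{t=0}^{H-1} \gamma^{t}\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \widehat{Q}_t .
\]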