Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Adaptive control is a field with many practical applications and a rich theory, but much of the development for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between nonlinear adaptive control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for nonlinear adaptive control. We present a large set of new globally convergent adaptive control algorithms that are applicable both to linearly parameterized systems and to nonlinearly parameterized systems satisfying a certain monotonicity requirement. We adopt a variational formalism based on the Bregman Lagrangian to define a general framework that systematically generates higher-order in-time velocity gradient algorithms. We generalize our algorithms to the non-Euclidean setting and show that the Euler-Lagrange equations for the Bregman Lagrangian lead to natural gradient and mirror descent-like adaptation laws with momentum that incorporate local geometry through a Hessian metric specified by a convex function. We prove that these non-Euclidean adaptation laws implicitly regularize the system model by minimizing the convex function that specifies the metric throughout adaptation. The local geometry imposed during adaptation may thus be used to select parameter vectors - out of the many that will lead to perfect tracking - for desired properties such as sparsity. We illustrate our analysis with simulations using a higher-order algorithm for nonlinearly parameterized systems to learn regularized hidden layer weights in a three-layer feedforward neural network.