
Shoham Sabach

Learning the Target Network in Function Space

Jun 03, 2024

MADA: Meta-Adaptive Optimizers through Hyper-Gradient Descent

Jan 17, 2024

Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate

Jan 05, 2024

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

Oct 09, 2023

Convex Bi-Level Optimization Problems with Non-smooth Outer Objective Function

Jul 17, 2023

TD Convergence: An Optimization Perspective

Jun 30, 2023

Resetting the Optimizer in Deep RL: An Empirical Study

Jun 30, 2023

Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization

Apr 06, 2019

Fast Generalized Conditional Gradient Method with Applications to Matrix Recovery Problems

Feb 15, 2018