Lillian J. Ratliff

Stackelberg Actor-Critic: Game-Theoretic Reinforcement Learning Algorithms

Sep 25, 2021
Liyuan Zheng, Tanner Fiez, Zane Alumbaugh, Benjamin Chasnov, Lillian J. Ratliff

Which Echo Chamber? Regions of Attraction in Learning with Decision-Dependent Distributions

Jun 30, 2021
Roy Dong, Lillian J. Ratliff

Zeroth-Order Methods for Convex-Concave Minmax Problems: Applications to Decision-Dependent Risk Minimization

Jun 16, 2021
Chinmay Maheshwari, Chih-Yuan Chiu, Eric Mazumdar, S. Shankar Sastry, Lillian J. Ratliff

Minimax Optimization with Smooth Algorithmic Adversaries

Jun 02, 2021
Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff

Function Design for Improved Competitive Ratio in Online Resource Allocation with Procurement Costs

Dec 23, 2020
Mitas Ray, Omid Sadeghi, Lillian J. Ratliff, Maryam Fazel

Safe Reinforcement Learning of Control-Affine Systems with Vertex Networks

Mar 20, 2020
Liyuan Zheng, Yuanyuan Shi, Lillian J. Ratliff, Baosen Zhang

Constrained Upper Confidence Reinforcement Learning

Jan 26, 2020
Liyuan Zheng, Lillian J. Ratliff

Policy-Gradient Algorithms Have No Guarantees of Convergence in Continuous Action and State Multi-Agent Settings

Jul 08, 2019
Eric Mazumdar, Lillian J. Ratliff, Michael I. Jordan, S. Shankar Sastry

Convergence of Learning Dynamics in Stackelberg Games

Jun 04, 2019
Tanner Fiez, Benjamin Chasnov, Lillian J. Ratliff

Convergence Analysis of Gradient-Based Learning with Non-Uniform Learning Rates in Non-Cooperative Multi-Agent Settings

May 30, 2019
Benjamin Chasnov, Lillian J. Ratliff, Eric Mazumdar, Samuel A. Burden
