Erwan Lecarpentier
On Constrained Optimization in Differentiable Neural Architecture Search

Jul 03, 2021
Kaitlin Maile, Erwan Lecarpentier, Hervé Luga, Dennis G. Wilson

Lipschitz Lifelong Reinforcement Learning

Jan 17, 2020
Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman

Non-Stationary Markov Decision Processes, a Worst-Case Approach using Model-Based Reinforcement Learning, Extended version

May 24, 2019
Erwan Lecarpentier, Emmanuel Rachelson

Non-Stationary Markov Decision Processes a Worst-Case Approach using Model-Based Reinforcement Learning

Apr 22, 2019
Erwan Lecarpentier, Emmanuel Rachelson

Open Loop Execution of Tree-Search Algorithms

May 03, 2018
Erwan Lecarpentier, Guillaume Infantes, Charles Lesire, Emmanuel Rachelson

Empirical evaluation of a Q-Learning Algorithm for Model-free Autonomous Soaring

Jul 18, 2017
Erwan Lecarpentier, Sebastian Rapp, Marc Melo, Emmanuel Rachelson