Lucas Lehnert

Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping

Feb 21, 2024
Lucas Lehnert, Sainbayar Sukhbaatar, Paul Mcvay, Michael Rabbat, Yuandong Tian

Maximum State Entropy Exploration using Predecessor and Successor Representations

Jun 26, 2023
Arnav Kumar Jain, Lucas Lehnert, Irina Rish, Glen Berseth

IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control

Jun 01, 2023
Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan, Zheqing Zhu, Olivier Delalleau

Reward-Predictive Clustering

Nov 07, 2022
Lucas Lehnert, Michael J. Frank, Michael L. Littman

Successor Features Support Model-based and Model-free Reinforcement Learning

Jan 31, 2019
Lucas Lehnert, Michael L. Littman

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

Dec 03, 2018
Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman

Transfer with Model Features in Reinforcement Learning

Jul 04, 2018
Lucas Lehnert, Michael L. Littman

Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning

Jul 31, 2017
Lucas Lehnert, Stefanie Tellex, Michael L. Littman

Policy Gradient Methods for Off-policy Control

Dec 13, 2015
Lucas Lehnert, Doina Precup
