
Mikael Henaff

Generalization to New Sequential Decision Making Tasks with In-Context Learning

Dec 06, 2023

Motif: Intrinsic Motivation from Artificial Intelligence Feedback

Sep 29, 2023

A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs

Jun 05, 2023

Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories

Oct 12, 2022

Exploration via Elliptical Episodic Bonuses

Oct 11, 2022

PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning

Aug 13, 2020

Explicit Explore-Exploit Algorithms in Continuous State Spaces

Dec 02, 2019

Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning

Nov 13, 2019

Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic

Jan 08, 2019

Model-Based Planning with Discrete and Continuous Actions

Apr 04, 2018