Yannick Schroecker

Vision-Language Models as a Source of Rewards

Dec 14, 2023

Structured State Space Models for In-Context Reinforcement Learning

Mar 09, 2023

Human-Timescale Adaptation in an Open-Ended Task Space

Jan 18, 2023

Meta-Gradients in Non-Stationary Environments

Sep 13, 2022

Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality

May 26, 2022

Bootstrapped Meta-Learning

Sep 09, 2021

Universal Value Density Estimation for Imitation Learning and Goal-Conditioned Reinforcement Learning

Feb 15, 2020

Active Learning within Constrained Environments through Imitation of an Expert Questioner

Jul 01, 2019

Generative predecessor models for sample-efficient imitation learning

Apr 01, 2019

Imitating Latent Policies from Observation

May 24, 2018