Michael Laskin

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Mar 08, 2024

Gemini: A Family of Highly Capable Multimodal Models
Dec 19, 2023

Vision-Language Models as a Source of Rewards
Dec 14, 2023

In-context Reinforcement Learning with Algorithm Distillation
Oct 25, 2022

Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning
Feb 08, 2022

CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery
Feb 01, 2022

URLB: Unsupervised Reinforcement Learning Benchmark
Oct 28, 2021

Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback
Aug 11, 2021

Hierarchical Few-Shot Imitation with Skill Transition Models
Jul 19, 2021

Decision Transformer: Reinforcement Learning via Sequence Modeling
Jun 24, 2021