Vincent Micheli

Efficient World Models with Context-Aware Tokenization
Jun 27, 2024

Diffusion for World Modeling: Visual Details Matter in Atari
May 20, 2024

Transformers are Sample Efficient World Models
Sep 01, 2022

MineRL Diamond 2021 Competition: Overview, Results, and Lessons Learned
Feb 17, 2022

Language Models are Few-Shot Butlers
Apr 16, 2021

Structural analysis of an all-purpose question answering model
Apr 13, 2021

On the importance of pre-training data volume for compact language models
Oct 09, 2020

Multi-task Reinforcement Learning with a Planning Quasi-Metric
Feb 08, 2020