
Sergey Levine

UC Berkeley

Yell At Your Robot: Improving On-the-Fly from Language Corrections

Mar 19, 2024

DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

Mar 19, 2024

Unfamiliar Finetuning Examples Control How Language Models Hallucinate

Mar 08, 2024

Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference

Mar 06, 2024

Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

Mar 06, 2024

MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting

Mar 05, 2024

SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation

Mar 01, 2024

ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL

Feb 29, 2024

Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation

Feb 29, 2024

Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control

Feb 28, 2024