
Lisa Lee


Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

Guide Your Agent with Adaptive Multimodal Rewards

Sep 19, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Jul 28, 2023

Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation

Jul 07, 2023

Barkour: Benchmarking Animal-level Agility with Quadruped Robots

May 24, 2023

Instruction-Following Agents with Jointly Pre-Trained Vision-Language Models

Oct 24, 2022

FCM: Forgetful Causal Masking Makes Causal Language Models Better Zero-Shot Learners

Oct 24, 2022