
Igor Mordatch

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Dec 22, 2023

Learning and Controlling Silicon Dopant Transitions in Graphene using Scanning Transmission Electron Microscopy

Nov 21, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"

Nov 15, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control

Jul 28, 2023

Improving Factuality and Reasoning in Language Models through Multiagent Debate

May 23, 2023

Masked Trajectory Models for Prediction, Representation, and Control

May 04, 2023

Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning

Mar 27, 2023

PaLM-E: An Embodied Multimodal Language Model

Mar 06, 2023

Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control

Mar 01, 2023