
Harris Chan

Vision-Language Models as a Source of Rewards

Dec 14, 2023

STEVE-1: A Generative Model for Text-to-Behavior in Minecraft

Jun 05, 2023

Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models

Nov 22, 2022

Large Language Models Are Human-Level Prompt Engineers

Nov 03, 2022

Inner Monologue: Embodied Reasoning through Planning with Language Models

Jul 12, 2022

Learning Domain Invariant Representations in Goal-conditioned Block MDPs

Oct 28, 2021

Multichannel Generative Language Model: Learning All Possible Factorizations Within and Across Channels

Oct 09, 2020

Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning

Jul 06, 2020

An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality

Feb 14, 2020

Interplay Between Optimization and Generalization of Stochastic Gradient Descent with Covariance Noise

Apr 03, 2019