Patrick Yin

Stabilizing Contrastive RL: Techniques for Offline Goal Reaching

Jun 06, 2023
Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine

In the same way that the computer vision (CV) and natural language processing (NLP) communities have developed self-supervised methods, reinforcement learning (RL) can be cast as a self-supervised problem: learning to reach any goal, without requiring human-specified rewards or labels. However, actually building a self-supervised foundation for RL faces some important challenges. Building on prior contrastive approaches to this RL problem, we conduct careful ablation experiments and discover that a shallow and wide architecture, combined with careful weight initialization and data augmentation, can significantly boost the performance of these contrastive RL approaches on challenging simulated benchmarks. Additionally, we demonstrate that, with these design decisions, contrastive approaches can solve real-world robotic manipulation tasks, with tasks being specified by a single goal image provided after training.
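
The abstract leaves the exact design decisions implicit, so here is a minimal PyTorch sketch of a contrastive goal-reaching critic in that spirit: wide, shallow encoders for state-action pairs and goals, a small final-layer initialization, and an InfoNCE objective with in-batch negatives. The layer widths, initialization scale, and names (ContrastiveCritic, infonce_loss) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    """Scores (state, action, goal) triples via an inner product of two encoders."""

    def __init__(self, obs_dim, act_dim, goal_dim, width=1024, repr_dim=64):
        super().__init__()
        # Shallow (two-layer) but wide encoders, following the ablation findings.
        self.sa_encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, width), nn.ReLU(),
            nn.Linear(width, repr_dim),
        )
        self.g_encoder = nn.Sequential(
            nn.Linear(goal_dim, width), nn.ReLU(),
            nn.Linear(width, repr_dim),
        )
        # "Careful weight initialization": small final-layer weights (assumed scale).
        for enc in (self.sa_encoder, self.g_encoder):
            nn.init.uniform_(enc[-1].weight, -1e-3, 1e-3)
            nn.init.zeros_(enc[-1].bias)

    def forward(self, obs, act, goal):
        phi = self.sa_encoder(torch.cat([obs, act], dim=-1))  # (B, repr_dim)
        psi = self.g_encoder(goal)                            # (B, repr_dim)
        return phi @ psi.t()                                  # (B, B) pairwise logits

def infonce_loss(critic, obs, act, goal):
    # Positives lie on the diagonal: goal i was actually reached from (obs i, act i);
    # every other goal in the batch serves as a negative.
    logits = critic(obs, act, goal)
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```

In use, the learned critic scores how reachable a goal is from a state-action pair and the policy is trained to maximize that score; data augmentation (e.g., random crops on image observations) would be applied before encoding.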

Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks

Oct 12, 2022
Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine

The utilization of broad datasets has proven to be crucial for generalization in a wide range of fields. However, how to effectively make use of diverse multi-task data for novel downstream tasks remains a grand challenge in robotics. To tackle this challenge, we introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data, in combination with online fine-tuning guided by subgoals in a learned lossy representation space. When faced with a novel task goal, the framework uses an affordance model to plan a sequence of lossy representations as subgoals that decompose the original task into easier problems. Learned from the broad data, the lossy representation emphasizes task-relevant information about states and goals while abstracting away redundant contexts that hinder generalization. It thus enables subgoal planning for unseen tasks, provides a compact input to the policy, and facilitates reward shaping during fine-tuning. We show that our framework can be pre-trained on large-scale datasets of robot experience from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs and without any manual reward engineering.

* CoRL 2022 
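
To make the subgoal-planning idea above concrete, the sketch below shows one way an affordance model could chain lossy-representation subgoals between the current observation and a goal image before handing them to a goal-conditioned policy. The interfaces (encoder, affordance_model.sample, policy), the greedy nearest-to-goal selection, and the gym-style environment API are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def plan_subgoals(affordance_model, z_start, z_goal, num_subgoals=3, num_candidates=64):
    """Greedily chain affordance samples from z_start toward z_goal in the lossy space."""
    plan, z = [], z_start
    for _ in range(num_subgoals):
        # Sample candidate next representations the model deems reachable from z
        # (assumed interface: returns an array of shape (num_candidates, repr_dim)).
        candidates = affordance_model.sample(z, n=num_candidates)
        # Keep the candidate closest to the goal in the lossy space (assumed metric).
        z = candidates[np.argmin(np.linalg.norm(candidates - z_goal, axis=-1))]
        plan.append(z)
    return plan

def run_episode(env, encoder, affordance_model, policy, goal_image, horizon=200):
    """Roll out the goal-conditioned policy, switching subgoals on a fixed schedule."""
    obs = env.reset()
    z_goal = encoder(goal_image)
    subgoals = plan_subgoals(affordance_model, encoder(obs), z_goal) + [z_goal]
    for t in range(horizon):
        z_sub = subgoals[min(t * len(subgoals) // horizon, len(subgoals) - 1)]
        action = policy(encoder(obs), z_sub)
        obs, reward, done, info = env.step(action)  # gym-style API assumed
        if done:
            break
```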

Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space

May 17, 2022
Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine

General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments. To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach configurable goals for a wide range of tasks on command. However, such goal-conditioned policies are notoriously difficult and time-consuming to train from scratch. In this paper, we propose Planning to Practice (PTP), a method that makes it practical to train goal-conditioned policies for long-horizon tasks that require multiple distinct types of interactions to solve. Our approach is based on two key ideas. First, we decompose the goal-reaching problem hierarchically, with a high-level planner that sets intermediate subgoals for a low-level model-free policy using conditional subgoal generators in the latent space. Second, we propose a hybrid approach that first pre-trains both the conditional subgoal generator and the policy on previously collected data through offline reinforcement learning, and then fine-tunes the policy via online exploration. This fine-tuning process is itself facilitated by the planned subgoals, which break down the original target task into short-horizon goal-reaching tasks that are significantly easier to learn. We conduct experiments in both simulation and the real world, in which the policy is pre-trained on demonstrations of short primitive behaviors and fine-tuned for temporally extended tasks that are unseen in the offline data. Our experimental results show that PTP can generate feasible sequences of subgoals that enable the policy to efficiently solve the target tasks.
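
A small sketch of the first idea, recursive subgoal generation with a conditional subgoal generator in latent space, is given below; the generator's call signature and the fixed recursion depth are illustrative assumptions rather than the paper's exact procedure.

```python
def generate_subgoals(subgoal_generator, z_state, z_goal, depth=2):
    """Recursively insert latent midpoints between the current latent and the goal latent."""
    if depth == 0:
        return []
    # The conditional subgoal generator proposes an intermediate latent that is
    # reachable from z_state on the way to z_goal (assumed signature).
    z_mid = subgoal_generator(z_state, z_goal)
    left = generate_subgoals(subgoal_generator, z_state, z_mid, depth - 1)
    right = generate_subgoals(subgoal_generator, z_mid, z_goal, depth - 1)
    return left + [z_mid] + right
```

With depth=2 this yields three ordered subgoals; appending the goal latent gives the sequence handed to the low-level policy, so that each segment becomes a short-horizon goal-reaching problem during fine-tuning.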

Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning

Apr 28, 2022
Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine

Building generalizable goal-conditioned agents from rich observations is key to reinforcement learning (RL) solving real-world problems. Traditionally in goal-conditioned RL, an agent is provided with the exact goal it intends to reach. However, it is often not realistic to know the configuration of the goal before performing a task. A more scalable framework would allow us to provide the agent with an example of an analogous task and have the agent then infer what the goal should be for its current state. We propose a new form of state abstraction called goal-conditioned bisimulation that captures functional equivariance, allowing for the reuse of skills to achieve new goals. We learn this representation using a metric form of this abstraction and show its ability to generalize to new goals in simulated manipulation tasks. Further, we prove that this learned representation is sufficient not only for goal-conditioned tasks but also for any downstream task described by a state-only reward function. Videos can be found at https://sites.google.com/view/gc-bisimulation.

* 20 Pages, 15 Figures, 4 Tables 
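
For a concrete picture of the "metric form of this abstraction," below is a sketch of a bisimulation-metric-style representation loss adapted to goal conditioning: embedding distances between (state, goal) pairs are regressed toward reward differences plus discounted next-state embedding distances. The encoder interface, the random in-batch pairing, and the L1/MSE choices follow the general recipe of deep bisimulation losses and are assumptions here, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def gc_bisim_loss(encoder, target_encoder, obs, goal, reward, next_obs, gamma=0.99):
    """Match (state, goal) embedding distances to reward differences plus
    discounted next-state embedding distances, pairing samples via a permuted batch."""
    z = encoder(obs, goal)                              # (B, d); encoder interface assumed
    perm = torch.randperm(z.shape[0], device=z.device)  # random pairing within the batch
    with torch.no_grad():
        z_next = target_encoder(next_obs, goal)         # slow-moving target encoder
    dist = torch.norm(z - z[perm], p=1, dim=-1)         # current embedding distance
    target = (reward - reward[perm]).abs() + gamma * torch.norm(
        z_next - z_next[perm], p=1, dim=-1)             # bisimulation-style target
    return F.mse_loss(dist, target)
```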