Chongyi Zheng

Contrastive Difference Predictive Coding

Oct 31, 2023
Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach

Predicting and reasoning about the future lie at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves a $2 \times$ median improvement in success rates and can better cope with stochastic environments. In tabular settings, we show that our method is about $20 \times$ more sample efficient than the successor representation and $1500 \times$ more sample efficient than the standard (Monte Carlo) version of contrastive predictive coding.

* Website (https://chongyi-zheng.github.io/td_infonce) and code (https://github.com/chongyi-zheng/td_infonce) 
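
For readers unfamiliar with the contrastive predictive coding objective this paper builds on, below is a minimal PyTorch sketch of an InfoNCE-style critic for goal-conditioned RL. The class and function names, network widths, and dimensions are illustrative assumptions, and the sketch shows only the standard (Monte Carlo) objective; the paper's temporal difference variant replaces the Monte Carlo target with a bootstrapped one, which is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveCritic(nn.Module):
    """Encodes (state, action) pairs and goals into a shared space; the inner
    product scores how likely a goal is to be reached from that pair."""
    def __init__(self, obs_dim, act_dim, goal_dim, repr_dim=64):
        super().__init__()
        self.sa_encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim))
        self.g_encoder = nn.Sequential(
            nn.Linear(goal_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim))

    def forward(self, obs, act, goal):
        phi = self.sa_encoder(torch.cat([obs, act], dim=-1))  # (B, d)
        psi = self.g_encoder(goal)                             # (B, d)
        return phi @ psi.t()                                   # (B, B) logits

def infonce_loss(critic, obs, act, future_goal):
    """Monte Carlo InfoNCE: the goal sampled from the same trajectory is the
    positive; goals from other rows of the batch serve as negatives."""
    logits = critic(obs, act, future_goal)
    labels = torch.arange(obs.shape[0])
    return F.cross_entropy(logits, labels)

# Toy usage with random data, only to show the shapes involved.
critic = ContrastiveCritic(obs_dim=10, act_dim=4, goal_dim=10)
obs, act, goal = torch.randn(32, 10), torch.randn(32, 4), torch.randn(32, 10)
loss = infonce_loss(critic, obs, act, goal)
loss.backward()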

Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior

Oct 02, 2023
Ruihan Yang, Zhuoqun Chen, Jianhan Ma, Chongyi Zheng, Yiyu Chen, Quan Nguyen, Xiaolong Wang

The agility of animals, particularly in complex activities such as running, turning, jumping, and backflipping, stands as an exemplar for robotic system design. Transferring this suite of behaviors to legged robotic systems raises essential questions: How can a robot be trained to learn multiple locomotion behaviors simultaneously? How can the robot execute these tasks with smooth transitions? And what strategies allow for the integrated application of these skills? This paper introduces the Versatile Instructable Motion prior (VIM), a reinforcement learning framework designed to incorporate a range of agile locomotion tasks suitable for advanced robotic applications. Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions using a functionality reward and a stylization reward. While the functionality reward guides the robot's ability to adopt varied skills, the stylization reward keeps its behavior aligned with the reference motions. Our evaluations of the VIM framework span both simulation environments and real-world deployment. To the best of our knowledge, this is the first work that allows a robot to learn diverse agile locomotion tasks concurrently with a single controller. Further details and supportive media can be found at our project site: https://rchalyang.github.io/VIM .

* Further details and supportive media can be found at our project site: https://rchalyang.github.io/VIM 
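
As a rough illustration of how a functionality term and a stylization term might be combined into a single scalar reward, here is a toy Python sketch. The specific tracking error, imitation error, and weights are assumptions for illustration and are not taken from the paper.

import numpy as np

def functionality_reward(base_velocity, commanded_velocity):
    """Hypothetical functionality term: reward tracking the commanded velocity."""
    return float(np.exp(-np.sum((base_velocity - commanded_velocity) ** 2)))

def stylization_reward(joint_angles, reference_joint_angles):
    """Hypothetical stylization term: reward staying close to the current
    frame of the reference motion."""
    return float(np.exp(-np.sum((joint_angles - reference_joint_angles) ** 2)))

def total_reward(base_velocity, commanded_velocity,
                 joint_angles, reference_joint_angles,
                 w_func=0.5, w_style=0.5):
    # The weighting between the two terms is an illustrative assumption.
    return (w_func * functionality_reward(base_velocity, commanded_velocity)
            + w_style * stylization_reward(joint_angles, reference_joint_angles))

# Toy usage with random values standing in for robot and reference states.
r = total_reward(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                 np.random.randn(12), np.random.randn(12))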

BridgeData V2: A Dataset for Robot Learning at Scale

Aug 24, 2023
Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers, Kuan Fang, Chelsea Finn, Sergey Levine

We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors designed to facilitate research on scalable robot learning. BridgeData V2 contains 60,096 trajectories collected across 24 environments on a publicly available low-cost robot. BridgeData V2 provides extensive task and environment variability, leading to skills that can generalize across environments, domains, and institutions, making the dataset a useful resource for a broad range of researchers. Additionally, the dataset is compatible with a wide variety of open-vocabulary, multi-task learning methods conditioned on goal images or natural language instructions. In our experiments, we train 6 state-of-the-art imitation learning and offline reinforcement learning methods on our dataset, and find that they succeed on a suite of tasks requiring varying amounts of generalization. We also demonstrate that the performance of these methods improves with more data and higher capacity models, and that training on a greater variety of skills leads to improved generalization. By publicly sharing BridgeData V2 and our pre-trained models, we aim to accelerate research in scalable robot learning methods. Project page at https://rail-berkeley.github.io/bridgedata

* 9 pages 
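
To illustrate the kind of goal-image-conditioned learning the dataset supports, below is a minimal goal-conditioned behavior cloning sketch in PyTorch. The network, image sizes, and the toy batch are placeholders; the actual BridgeData V2 format and the methods evaluated in the paper differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalConditionedPolicy(nn.Module):
    """Encodes the current image and a goal image, then predicts an action."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                  nn.Linear(128, action_dim))

    def forward(self, obs_img, goal_img):
        z = torch.cat([self.encoder(obs_img), self.encoder(goal_img)], dim=-1)
        return self.head(z)

# Toy batch standing in for real trajectories; shapes are illustrative.
policy = GoalConditionedPolicy()
obs = torch.randn(8, 3, 64, 64)
goal = torch.randn(8, 3, 64, 64)
expert_action = torch.randn(8, 7)
loss = F.mse_loss(policy(obs, goal), expert_action)
loss.backward()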

Stabilizing Contrastive RL: Techniques for Offline Goal Reaching

Jun 06, 2023
Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine

In the same way that the computer vision (CV) and natural language processing (NLP) communities have developed self-supervised methods, reinforcement learning (RL) can be cast as a self-supervised problem: learning to reach any goal, without requiring human-specified rewards or labels. However, actually building a self-supervised foundation for RL faces some important challenges. Building on prior contrastive approaches to this RL problem, we conduct careful ablation experiments and discover that a shallow and wide architecture, combined with careful weight initialization and data augmentation, can significantly boost the performance of these contrastive RL approaches on challenging simulated benchmarks. Additionally, we demonstrate that, with these design decisions, contrastive approaches can solve real-world robotic manipulation tasks, with tasks being specified by a single goal image provided after training.
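
As a sketch of the two design decisions the abstract highlights, the snippet below builds a wide, shallow encoder with a down-scaled final-layer initialization and applies a simple random-shift image augmentation. The width, initialization scale, and padding are illustrative values, not the paper's exact hyperparameters.

import torch
import torch.nn as nn

def make_wide_shallow_encoder(in_dim, repr_dim=64, width=1024, final_init_scale=1e-2):
    """A wide, shallow MLP encoder; the final layer is initialized small so the
    initial representations (and contrastive logits) start near zero."""
    net = nn.Sequential(
        nn.Linear(in_dim, width), nn.ReLU(),
        nn.Linear(width, repr_dim))
    with torch.no_grad():
        net[-1].weight.mul_(final_init_scale)
        net[-1].bias.zero_()
    return net

def random_shift(images, pad=4):
    """Simple random-shift augmentation (pad, then crop back to the original
    size), a common choice in pixel-based RL."""
    n, c, h, w = images.shape
    padded = nn.functional.pad(images, (pad, pad, pad, pad), mode='replicate')
    out = torch.empty_like(images)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

# Two independent illustrations: a state encoder and an image augmentation.
encoder = make_wide_shallow_encoder(in_dim=39)
z = encoder(torch.randn(4, 39))
aug = random_shift(torch.randn(4, 3, 64, 64))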

Learning Domain Invariant Representations in Goal-conditioned Block MDPs

Oct 28, 2021
Beining Han, Chongyi Zheng, Harris Chan, Keiran Paster, Michael R. Zhang, Jimmy Ba

Deep reinforcement learning (RL) has been successful in solving many complex Markov decision process (MDP) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for agents with visual input. Unfortunately, deep RL policies are usually sensitive to these changes and fail to act robustly against them. This resembles the problem of domain generalization in supervised learning. In this work, we study this problem for goal-conditioned RL agents. We propose a theoretical framework in the Block MDP setting that characterizes the generalizability of goal-conditioned policies to new environments. Under this framework, we develop a practical method, PA-SkewFit, that enhances domain generalization. The empirical evaluation shows that our goal-conditioned RL agent can perform well in various unseen test environments, improving over baselines by 50%.

* NeurIPS2021  
* 33 pages 
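
The snippet below is a generic domain-invariance alignment sketch in PyTorch, not PA-SkewFit: it simply pulls together the latents of paired observations that share the same underlying state but differ in spurious factors such as the background. The pairing, encoder, and dimensions are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Encoder mapping observations (flattened to vectors here) to a latent state.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 32))

def alignment_loss(obs_domain_a, obs_domain_b):
    """Encourage observations of the same underlying state, rendered under two
    domains, to map to the same latent representation."""
    return F.mse_loss(encoder(obs_domain_a), encoder(obs_domain_b))

# Toy paired batch: same underlying states under two different domains.
obs_a, obs_b = torch.randn(16, 128), torch.randn(16, 128)
loss = alignment_loss(obs_a, obs_b)
loss.backward()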

Learning Nearly Decomposable Value Functions Via Communication Minimization

Oct 11, 2019
Tonghan Wang, Jianhao Wang, Chongyi Zheng, Chongjie Zhang

Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization has emerged as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods have focused on learning fully decentralized value functions, which are not efficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable value functions with communication, with which agents act on their own most of the time but occasionally send messages to other agents for effective coordination. This framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers. These regularizers maximize the mutual information between decentralized Q functions and communication messages while minimizing the entropy of messages exchanged between agents. We show how to optimize these regularizers in a way that integrates easily with existing value function factorization methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit micromanagement benchmark, our framework significantly outperforms baseline methods and allows cutting off more than $80\%$ of communication without sacrificing performance. The video of our experiments is available at https://sites.google.com/view/ndvf.
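
The two regularizers can be illustrated with a small variational sketch: a Gaussian message distribution whose entropy is penalized, and a decoder whose log-likelihood provides a lower bound on the mutual information being maximized. Everything below, including the shapes, the decoder target, and the weights, is an assumption for illustration and not the paper's actual objective or implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageEncoder(nn.Module):
    """Maps an agent's local hidden state to a Gaussian message distribution."""
    def __init__(self, hidden_dim=64, msg_dim=8):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, msg_dim)
        self.log_std = nn.Linear(hidden_dim, msg_dim)

    def forward(self, h):
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        return torch.distributions.Normal(mu, log_std.exp())

# Hypothetical batch of local hidden states and the receiving agent's
# chosen actions, used as the target of a variational decoder.
enc = MessageEncoder()
decoder = nn.Linear(8, 5)  # predicts the receiver's action logits from the message
h = torch.randn(32, 64)
receiver_actions = torch.randint(0, 5, (32,))

dist = enc(h)
msg = dist.rsample()
# (1) Variational lower bound on the mutual information between messages and
#     the receiver's behaviour: maximize the decoder log-likelihood.
mi_term = -F.cross_entropy(decoder(msg), receiver_actions)
# (2) Entropy penalty: keep messages low-entropy so most can be dropped cheaply.
entropy_term = dist.entropy().sum(dim=-1).mean()
loss = -mi_term + 0.01 * entropy_term  # the weights here are illustrative
loss.backward()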
