Robin Walters

Leveraging Symmetries in Pick and Place

Aug 15, 2023
Haojie Huang, Dian Wang, Arsh Tangri, Robin Walters, Robert Platt


Robotic pick and place tasks are symmetric under translations and rotations of both the object to be picked and the desired place pose. For example, if the pick object is rotated or translated, then the optimal pick action should also rotate or translate. The same is true for the place pose; if the desired place pose changes, then the place action should also transform accordingly. A recently proposed pick and place framework known as Transporter Net captures some of these symmetries, but not all. This paper analytically studies the symmetries present in planar robotic pick and place and proposes a method of incorporating equivariant neural models into Transporter Net in a way that captures all symmetries. The new model, which we call Equivariant Transporter Net, is equivariant to both pick and place symmetries and can immediately generalize pick and place knowledge to different pick and place poses. We evaluate the new model empirically and show that it is much more sample efficient than the non-symmetric version, resulting in a system that can imitate demonstrated pick and place behavior using very few human demonstrations on a variety of imitation learning tasks.
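The pick symmetry the abstract describes can be checked concretely: if the optimal pick location is the maximum of a pick-quality map, rotating the observation must rotate that location with it. A toy sketch (the `pick` function and the random quality map are illustrative, not the paper's model):

```python
import numpy as np

def pick(q_map):
    """Return the (row, col) of the highest-scoring pick location."""
    return np.unravel_index(np.argmax(q_map), q_map.shape)

rng = np.random.default_rng(0)
q = rng.random((8, 8))          # toy pick-quality map over the workspace

r, c = pick(q)
r90, c90 = pick(np.rot90(q))    # rotate the observation 90 deg CCW

# A pixel (r, c) in an HxW map lands at (W-1-c, r) under np.rot90,
# so the best pick location rotates exactly with the observation.
H, W = q.shape
assert (r90, c90) == (W - 1 - c, r)
```

An equivariant network bakes this constraint into its layers, so the property holds by construction rather than having to be learned from data.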

* arXiv admin note: substantial text overlap with arXiv:2202.09400 

Disentangling Node Attributes from Graph Topology for Improved Generalizability in Link Prediction

Jul 17, 2023
Ayan Chatterjee, Robin Walters, Giulia Menichetti, Tina Eliassi-Rad


Link prediction is a crucial task in graph machine learning with diverse applications. We explore the interplay between node attributes and graph topology and demonstrate that incorporating pre-trained node attributes improves the generalization power of link prediction models. Our proposed method, UPNA (Unsupervised Pre-training of Node Attributes), solves the inductive link prediction problem by learning a function that takes a pair of node attributes and predicts the probability of an edge, as opposed to Graph Neural Networks (GNNs), which can be prone to topological shortcuts in graphs with power-law degree distribution. In this manner, UPNA learns a significant part of the latent graph generation mechanism since the learned function can be used to add incoming nodes to a growing graph. By leveraging pre-trained node attributes, we overcome observational bias and make meaningful predictions about unobserved nodes, surpassing state-of-the-art performance (3X to 34X improvement on benchmark datasets). UPNA can be applied to various pairwise learning tasks and integrated with existing link prediction models to enhance their generalizability and bolster graph generative models.
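The key structural point is that the edge score is a function of the two nodes' attributes alone, so it applies unchanged to nodes never seen in training. A minimal sketch (the `edge_prob` name and the untrained logistic form are illustrative assumptions, not UPNA's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4                                   # attribute dimension
W = rng.normal(size=(2 * D,))           # toy (untrained) weights

def edge_prob(x_u, x_v, w=W):
    """Score a node pair from attributes alone -- no graph topology is
    consulted, so the same function applies to unseen incoming nodes."""
    z = np.concatenate([x_u, x_v]) @ w
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid -> probability in (0, 1)

# Works for a brand-new node the moment its attributes are available.
x_new, x_old = rng.normal(size=D), rng.normal(size=D)
p = edge_prob(x_new, x_old)
assert 0.0 < p < 1.0
```

Because the function never indexes into a fixed node set or adjacency structure, it sidesteps the topological shortcuts the abstract attributes to GNNs on power-law graphs.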

* 17 pages, 6 figures 

Can Euclidean Symmetry be Leveraged in Reinforcement Learning and Planning?

Jul 17, 2023
Linfeng Zhao, Owen Howell, Jung Yeon Park, Xupeng Zhu, Robin Walters, Lawson L. S. Wong


In robotic tasks, changes in reference frames typically do not influence the underlying physical properties of the system; this is known as the invariance of physical laws. These distance-preserving changes encompass isometric transformations such as translations, rotations, and reflections, which together form the Euclidean group. In this work, we delve into the design of improved learning algorithms for reinforcement learning and planning tasks that possess Euclidean group symmetry. We put forth a theory that unifies prior work on discrete and continuous symmetry in reinforcement learning, planning, and optimal control. On the algorithmic side, we extend value-based 2D path planning to continuous MDPs and propose a pipeline for constructing equivariant sampling-based planning algorithms. Our work is substantiated with empirical evidence and illustrated through examples that explain the benefits of equivariance to Euclidean symmetry in tackling natural control problems.
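The invariance the abstract starts from is easy to verify on a toy reach task: a reward that depends only on the state-goal distance is unchanged when the whole reference frame is rotated. A minimal sketch (the `reward` function is a hypothetical stand-in for a Euclidean-symmetric MDP):

```python
import numpy as np

def reward(state, goal):
    """Toy reach reward: depends only on the state-goal distance."""
    return -np.linalg.norm(state - goal)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # element of SO(2)

s, g = np.array([1.0, 2.0]), np.array([-0.5, 3.0])

# Rotating the whole reference frame leaves the reward unchanged.
assert np.isclose(reward(R @ s, R @ g), reward(s, g))
```

When reward and dynamics share this symmetry, the optimal value function and policy inherit it, which is the structure the paper's equivariant planners exploit.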

* Preprint. Website: http://lfzhao.com/SymCtrl 

Equivariant Single View Pose Prediction Via Induced and Restricted Representations

Jul 07, 2023
Owen Howell, David Klee, Ondrej Biza, Linfeng Zhao, Robin Walters


Learning about the three-dimensional world from two-dimensional images is a fundamental problem in computer vision. An ideal neural network architecture for such tasks would leverage the fact that objects can be rotated and translated in three dimensions to make predictions about novel images. However, imposing SO(3)-equivariance on two-dimensional inputs is difficult because the group of three-dimensional rotations does not have a natural action on the two-dimensional plane. Specifically, it is possible that an element of SO(3) will rotate an image out of plane. We show that an algorithm that learns a three-dimensional representation of the world from two-dimensional images must satisfy certain geometric consistency properties, which we formulate as SO(2)-equivariance constraints. We use the induced and restricted representations of SO(2) on SO(3) to construct and classify architectures which satisfy these geometric consistency constraints. We prove that any architecture which respects said consistency constraints can be realized as an instance of our construction. We show that three previously proposed neural architectures for 3D pose prediction are special cases of our construction. We propose a new algorithm that is a learnable generalization of previously considered methods. We test our architecture on three pose prediction tasks and achieve SOTA results on both the PASCAL3D+ and SYMSOL pose estimation tasks.
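The SO(2) constraint can be seen in coordinates: rotations about the camera's optical axis are the one SO(3) subgroup that acts on the image plane without rotating it out of plane, and their restriction to that plane is exactly a 2D rotation. A small illustration of this (not the paper's representation-theoretic construction):

```python
import numpy as np

def Rz(theta):
    """Rotation about the camera's optical (z) axis, an element of SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

theta = 1.1
R3 = Rz(theta)
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])

# Restricting this SO(3) subgroup to the image (x, y) plane gives exactly
# a 2D rotation: in-plane rotations are the part of SO(3) that an
# image-based network can be equivariant to directly.
assert np.allclose(R3[:2, :2], R2)
```

Out-of-plane rotations have no such image-plane action, which is why the paper needs induced and restricted representations rather than plain SO(3)-equivariant layers on the image.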


One-shot Imitation Learning via Interaction Warping

Jun 21, 2023
Ondrej Biza, Skye Thompson, Kishore Reddy Pagidi, Abhinav Kumar, Elise van der Pol, Robin Walters, Thomas Kipf, Jan-Willem van de Meent, Lawson L. S. Wong, Robert Platt


Imitation learning of robot policies from few demonstrations is crucial in open-ended applications. We propose a new method, Interaction Warping, for learning SE(3) robotic manipulation policies from a single demonstration. We infer the 3D mesh of each object in the environment using shape warping, a technique for aligning point clouds across object instances. Then, we represent manipulation actions as keypoints on objects, which can be warped with the shape of the object. We show successful one-shot imitation learning on three simulated and real-world object re-arrangement tasks. We also demonstrate the ability of our method to predict object meshes and robot grasps in the wild.
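The keypoint-warping idea can be sketched in a few lines: align the demo object's point cloud to a new instance, then push the annotated keypoint through the same transform. The crude centroid-and-scale alignment below is a hypothetical stand-in for the paper's learned shape warping:

```python
import numpy as np

def warp_keypoint(kp, src_pts, dst_pts):
    """Map a keypoint annotated on a source object onto a new instance,
    using a crude centroid + isotropic-scale alignment of the two clouds
    (a toy stand-in for learned shape warping)."""
    mu_s, mu_d = src_pts.mean(0), dst_pts.mean(0)
    scale = np.linalg.norm(dst_pts - mu_d) / np.linalg.norm(src_pts - mu_s)
    return (kp - mu_s) * scale + mu_d

rng = np.random.default_rng(2)
src = rng.normal(size=(50, 3))                   # demo object's point cloud
dst = 2.0 * src + np.array([1.0, 0.0, -1.0])     # bigger, shifted instance

kp = src[7]                    # a manipulation keypoint on the demo object
assert np.allclose(warp_keypoint(kp, src, dst), dst[7])
```

Because actions are attached to object geometry rather than to workspace coordinates, a single demonstration transfers to new instances and poses.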


On Robot Grasp Learning Using Equivariant Models

Jun 10, 2023
Xupeng Zhu, Dian Wang, Guanang Su, Ondrej Biza, Robin Walters, Robert Platt


Real-world grasp detection is challenging due to the stochasticity in grasp dynamics and the noise in hardware. Ideally, the system would adapt to the real world by training directly on physical systems. However, this is generally difficult due to the large amount of training data required by most grasp learning models. In this paper, we note that the planar grasp function is $\mathrm{SE}(2)$-equivariant and demonstrate that this structure can be used to constrain the neural network used during learning. This creates an inductive bias that can significantly improve the sample efficiency of grasp learning and enable end-to-end training from scratch on a physical robot with as few as $600$ grasp attempts. We call this method Symmetric Grasp learning (SymGrasp) and show that it can learn to grasp ``from scratch'' in less than 1.5 hours of physical robot time.
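One standard way to obtain the kind of equivariant map the abstract describes is group averaging: rotate the input, apply an arbitrary network, rotate the output back, and average over the group. A toy sketch over the 4-fold rotation subgroup (the `net` function is an arbitrary placeholder, not SymGrasp's architecture):

```python
import numpy as np

def net(x):
    """Arbitrary (non-equivariant) map from grasp-quality maps to maps."""
    return np.tanh(x) + np.roll(x, 1, axis=1)

def symmetrized(x):
    """Average net over the 4-fold rotation group: rotate in, evaluate,
    rotate back. The result is C4-equivariant by construction."""
    return sum(np.rot90(net(np.rot90(x, k)), -k) for k in range(4)) / 4.0

rng = np.random.default_rng(3)
x = rng.normal(size=(6, 6))

# Rotating the input rotates the output grasp map identically.
assert np.allclose(symmetrized(np.rot90(x)), np.rot90(symmetrized(x)))
```

Equivariant layers achieve the same constraint without the 4x evaluation cost, which is what makes the inductive bias cheap enough for on-robot training.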

* Accepted in Autonomous Robots. arXiv admin note: substantial text overlap with arXiv:2202.09468 

Improving Convergence and Generalization Using Parameter Symmetries

May 22, 2023
Bo Zhao, Robert M. Gower, Robin Walters, Rose Yu


In overparametrized models, different values of the parameters may result in the same loss value. Parameter space symmetries are transformations that change the model parameters but leave the loss invariant. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's success is not well understood. In this paper, we show that teleportation not only speeds up optimization in the short-term, but gives overall faster time to convergence. Additionally, we show that teleporting to minima with different curvatures improves generalization and provide insights on the connection between the curvature of the minima and generalization ability. Finally, we show that integrating teleportation into a wide range of optimization algorithms and optimization-based meta-learning improves convergence.
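A parameter-space symmetry is easy to exhibit for a two-layer linear model: any invertible $g$ acting as $(W_1, W_2) \mapsto (gW_1, W_2 g^{-1})$ leaves the product, and hence the loss, exactly unchanged while moving to a different point in parameter space, which is the degree of freedom teleportation exploits. A minimal numerical sketch (toy data and weights, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(2, 3))
X, Y = rng.normal(size=(5, 8)), rng.normal(size=(2, 8))

def loss(W1, W2):
    """Squared error of the two-layer linear model W2 @ W1."""
    return np.sum((W2 @ W1 @ X - Y) ** 2)

# (W1, W2) -> (g W1, W2 g^{-1}) preserves W2 @ W1, hence the loss,
# while landing at a different point in parameter space.
g = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # well-conditioned, invertible
W1_t, W2_t = g @ W1, W2 @ np.linalg.inv(g)

assert np.isclose(loss(W1, W2), loss(W1_t, W2_t))
assert not np.allclose(W1, W1_t)                # but the parameters moved
```

Although the loss value is identical at the two points, the gradients and local curvature generally differ, which is why choosing where to teleport can change both convergence speed and generalization.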

* 29 pages, 13 figures 

A General Theory of Correct, Incorrect, and Extrinsic Equivariance

Mar 08, 2023
Dian Wang, Xupeng Zhu, Jung Yeon Park, Robert Platt, Robin Walters


Although equivariant machine learning has proven effective at many tasks, success depends heavily on the assumption that the ground truth function is symmetric over the entire domain matching the symmetry in an equivariant neural network. A missing piece in the equivariant learning literature is the analysis of equivariant networks when symmetry exists only partially in the domain. In this work, we present a general theory for such a situation. We propose pointwise definitions of correct, incorrect, and extrinsic equivariance, which allow us to quantify continuously the degree of each type of equivariance a function displays. We then study the impact of various degrees of incorrect or extrinsic symmetry on model error. We prove error lower bounds for invariant or equivariant networks in classification or regression settings with partially incorrect symmetry. We also analyze the potentially harmful effects of extrinsic equivariance. Experiments validate these results in three different environments.
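The pointwise notion can be made concrete with a scalar toy example: measure the equivariance error $|f(gx) - gf(x)|$ at individual inputs, so a function can be correctly equivariant at some points and incorrectly equivariant at others. The functions and the `equiv_error` helper below are hypothetical illustrations, not the paper's definitions:

```python
import numpy as np

def equiv_error(f, x, g=lambda v: -v):
    """Pointwise equivariance error of f at x under the reflection g:
    zero means f is (correctly) equivariant at that input."""
    return abs(f(g(x)) - g(f(x)))

f_sym = lambda x: x ** 3                          # odd: equivariant everywhere
f_partial = lambda x: x ** 3 if x > 0 else 0.0    # symmetric only in part

assert equiv_error(f_sym, 2.0) == 0.0
assert equiv_error(f_partial, 2.0) > 0.0          # incorrect equivariance here
```

Forcing a network to be equivariant everywhere when the ground truth behaves like `f_partial` incurs exactly the kind of unavoidable error the paper lower-bounds.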


Image to Sphere: Learning Equivariant Features for Efficient Pose Prediction

Feb 27, 2023
David M. Klee, Ondrej Biza, Robert Platt, Robin Walters


Predicting the pose of objects from a single image is an important but difficult computer vision problem. Methods that predict a single point estimate do not predict the pose of objects with symmetries well and cannot represent uncertainty. Alternatively, some works predict a distribution over orientations in $\mathrm{SO}(3)$. However, training such models can be computation- and sample-inefficient. Instead, we propose a novel mapping of features from the image domain to the 3D rotation manifold. Our method then leverages $\mathrm{SO}(3)$ equivariant layers, which are more sample efficient, and outputs a distribution over rotations that can be sampled at arbitrary resolution. We demonstrate the effectiveness of our method at object orientation prediction, and achieve state-of-the-art performance on the popular PASCAL3D+ dataset. Moreover, we show that our method can model complex object symmetries, without any modifications to the parameters or loss function. Code is available at https://dmklee.github.io/image2sphere.
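Why a distribution beats a point estimate for symmetric objects can be shown in a lower-dimensional toy: a 2-fold symmetric object yields two equally likely orientations, which a softmax over a grid of candidates represents naturally and a single regressed angle cannot. A sketch in SO(2) rather than the paper's SO(3) (the hand-built logits are illustrative, not model outputs):

```python
import numpy as np

# Logits over a grid of candidate orientations for a 2-fold symmetric
# object: peaks at 0 and pi.
angles = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
logits = 5.0 * np.cos(2 * angles)

probs = np.exp(logits - logits.max())
probs /= probs.sum()                    # softmax -> valid distribution

assert np.isclose(probs.sum(), 1.0)
# Indices 0 and 180 correspond to angles 0 and pi: two equal modes,
# something a single point estimate cannot express.
assert np.isclose(probs[0], probs[180])
```

The paper's model produces the analogous distribution over SO(3), with equivariant layers supplying the sample efficiency.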


Generative Adversarial Symmetry Discovery

Feb 08, 2023
Jianke Yang, Robin Walters, Nima Dehmamy, Rose Yu


Despite the success of equivariant neural networks in scientific applications, they require knowing the symmetry group a priori. However, it may be difficult to know the right symmetry to use as an inductive bias in practice, and enforcing the wrong symmetry could hurt performance. In this paper, we propose a framework, LieGAN, to automatically discover equivariances from a dataset using a paradigm akin to generative adversarial training. Specifically, a generator learns a group of transformations applied to the data, which preserves the original distribution and fools the discriminator. LieGAN represents symmetry as an interpretable Lie algebra basis and can discover various symmetries, such as the rotation group $\mathrm{SO}(n)$ and the restricted Lorentz group $\mathrm{SO}(1,3)^+$, in trajectory prediction and top quark tagging tasks. The learned symmetry can also be readily used in several existing equivariant neural networks to improve accuracy and generalization in prediction.
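The "interpretable Lie algebra basis" representation works because exponentiating a basis element generates a continuous group of transformations. A self-contained sketch for the simplest case: the single so(2) generator exponentiates to the rotation group (the truncated-series `expm` is a stand-in for a library matrix exponential):

```python
import numpy as np

L = np.array([[0.0, -1.0],
              [1.0,  0.0]])     # Lie algebra basis element of so(2)

def expm(A, terms=30):
    """Matrix exponential via truncated power series."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

theta = 0.9
G = expm(theta * L)             # exponentiate: algebra -> group element
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The basis parametrizes a continuous family of transformations: this
# generator produces exactly the rotation group SO(2).
assert np.allclose(G, R)
```

Learning the basis matrices themselves, rather than a fixed group, is what lets the generator search over candidate symmetries while keeping the result human-readable.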
