Jack Collins

TWIST: Teacher-Student World Model Distillation for Efficient Sim-to-Real Transfer

Nov 07, 2023
Jun Yamada, Marc Rigter, Jack Collins, Ingmar Posner

Model-based RL is a promising approach for real-world robotics due to its improved sample efficiency and generalisation capabilities compared to model-free RL. However, effective model-based RL solutions for vision-based real-world applications require bridging the sim-to-real gap for any world model learnt. Due to its significant computational cost, standard domain randomisation does not provide an effective solution to this problem. This paper proposes TWIST (Teacher-Student World Model Distillation for Sim-to-Real Transfer) to achieve efficient sim-to-real transfer of vision-based model-based RL using distillation. TWIST leverages state observations as readily accessible, privileged information commonly garnered from a simulator to significantly accelerate sim-to-real transfer. Specifically, a teacher world model is trained efficiently on state information while a matching dataset of domain-randomised image observations is collected. The teacher world model then supervises a student world model that takes the domain-randomised image observations as input. By distilling the learned latent dynamics model from the teacher to the student model, TWIST achieves efficient and effective sim-to-real transfer for vision-based model-based RL tasks. Experiments in simulated and real robotics tasks demonstrate that our approach outperforms naive domain randomisation and model-free methods in terms of sample efficiency and task performance of sim-to-real transfer.
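
To make the distillation step concrete, here is a minimal PyTorch sketch of the kind of teacher-to-student latent supervision the abstract describes. All module names and interfaces (teacher.encode, student.dynamics, and so on) are our own illustrative assumptions, not the paper's API:

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher, student, states, images, actions):
    """Hedged sketch of latent-space distillation: the frozen teacher
    encodes privileged simulator states, the student encodes the matching
    domain-randomised images, and the student is regressed onto the
    teacher's latents and one-step latent dynamics predictions."""
    with torch.no_grad():                       # teacher is frozen
        z_teacher = teacher.encode(states)      # privileged latents
        z_next_teacher = teacher.dynamics(z_teacher, actions)
    z_student = student.encode(images)          # image-based latents
    z_next_student = student.dynamics(z_student, actions)
    latent_loss = F.mse_loss(z_student, z_teacher)
    dynamics_loss = F.mse_loss(z_next_student, z_next_teacher)
    return latent_loss + dynamics_loss
```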

* 7 pages, 6 figures 

BioDEX: Large-Scale Biomedical Adverse Drug Event Extraction for Real-World Pharmacovigilance

May 22, 2023
Karel D'Oosterlinck, François Remy, Johannes Deleu, Thomas Demeester, Chris Develder, Klim Zaporojets, Aneiss Ghodsi, Simon Ellershaw, Jack Collins, Christopher Potts

Timely and accurate extraction of Adverse Drug Events (ADE) from biomedical literature is paramount for public safety, but involves slow and costly manual labor. We set out to improve drug safety monitoring (pharmacovigilance, PV) through the use of Natural Language Processing (NLP). We introduce BioDEX, a large-scale resource for Biomedical adverse Drug Event Extraction, rooted in the historical output of drug safety reporting in the U.S. BioDEX consists of 65k abstracts and 19k full-text biomedical papers with 256k associated document-level safety reports created by medical experts. The core features of these reports include the reported weight, age, and biological sex of a patient, a set of drugs taken by the patient, the drug dosages, the reactions experienced, and whether the reaction was life-threatening. In this work, we consider the task of predicting the core information of a report given its originating paper. We estimate human performance to be 72.0% F1, whereas our best model achieves 62.3% F1, indicating significant headroom on this task. We also begin to explore ways in which these models could help professional PV reviewers. Our code and data are available: https://github.com/KarelDO/BioDEX.
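
To illustrate how such report predictions might be scored, the sketch below computes a micro-averaged F1 over the (field, value) pairs of a single report. The field names and scoring details are our own simplification; the authoritative evaluation code is in the linked repository:

```python
def report_f1(pred: dict, gold: dict) -> float:
    """Micro F1 over (field, value) pairs of one safety report.
    Fields mirror the abstract (drugs, reactions, patient attributes);
    a simplification of the repository's scorer, not its exact logic."""
    pred_pairs = {(field, v) for field, values in pred.items() for v in values}
    gold_pairs = {(field, v) for field, values in gold.items() for v in values}
    if not pred_pairs or not gold_pairs:
        return 0.0
    tp = len(pred_pairs & gold_pairs)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_pairs)
    recall = tp / len(gold_pairs)
    return 2 * precision * recall / (precision + recall)

# Toy example: one missed reaction lowers recall but not precision.
pred = {"drugs": ["metformin"], "reactions": ["nausea"]}
gold = {"drugs": ["metformin"], "reactions": ["nausea", "headache"]}
print(round(report_f1(pred, gold), 3))  # 0.8
```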

* 28 pages 

RAMP: A Benchmark for Evaluating Robotic Assembly Manipulation and Planning

May 16, 2023
Jack Collins, Mark Robson, Jun Yamada, Mohan Sridharan, Karol Janik, Ingmar Posner

We introduce RAMP, an open-source robotics benchmark inspired by real-world industrial assembly tasks. RAMP consists of beams that a robot must assemble into specified goal configurations using pegs as fasteners. As such, it assesses planning and execution capabilities, and poses challenges in perception, reasoning, manipulation, diagnostics, fault recovery, and goal parsing. RAMP is designed to be accessible and extensible: parts are either 3D printed or otherwise constructed from readily obtainable materials, and the part designs and detailed instructions are publicly available. To broaden community engagement, RAMP incorporates fixtures such as AprilTags, which enable researchers to focus on individual sub-tasks of the assembly challenge if desired. We provide a full digital twin as well as rudimentary baselines to enable rapid progress. Our vision is for RAMP to form the substrate for a community-driven endeavour that evolves as capability matures.
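
For a feel of what goal parsing involves here, the sketch below encodes an assembly goal as a set of peg joints between beams. This data structure is purely our own illustration, not the benchmark's published format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Joint:
    """One peg fastening two beams at indexed holes (illustrative only)."""
    beam_a: str
    hole_a: int
    beam_b: str
    hole_b: int

# Hypothetical goal configuration: three beams pegged into a triangle.
goal = {
    Joint("beam_long_1", 0, "beam_short_1", 2),
    Joint("beam_short_1", 0, "beam_short_2", 2),
    Joint("beam_short_2", 0, "beam_long_1", 4),
}

def assembled(joints_done: set) -> bool:
    """Goal checking reduces to set coverage over the required joints."""
    return goal <= joints_done
```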

* Project website: https://sites.google.com/oxfordrobotics.institute/ramp 

Efficient Skill Acquisition for Complex Manipulation Tasks in Obstructed Environments

Mar 06, 2023
Jun Yamada, Jack Collins, Ingmar Posner

Data efficiency in robotic skill acquisition is crucial for operating robots in varied small-batch assembly settings. To operate in such environments, robots must have robust obstacle avoidance and versatile goal conditioning acquired from only a few simple demonstrations. Existing approaches, however, fall short of these requirements. Deep reinforcement learning (RL) enables a robot to learn complex manipulation tasks but is often limited to small task spaces in the real world due to sample inefficiency and safety concerns. Motion planning (MP) can generate collision-free paths in obstructed environments, but cannot solve complex manipulation tasks and requires goal states that are often specified by a user or an object-specific pose estimator. In this work, we propose a system for efficient skill acquisition that leverages an object-centric generative model (OCGM) for versatile goal identification to specify a goal for MP, combined with RL, to solve complex manipulation tasks in obstructed environments. Specifically, the OCGM enables one-shot target object identification and re-identification in new scenes, allowing MP to guide the robot to the target object while avoiding obstacles. This is combined with a skill transition network, which bridges the gap between the terminal states of MP and feasible start states of a sample-efficient RL policy. Our experiments demonstrate that OCGM-based one-shot goal identification achieves accuracy competitive with other baseline approaches, and that our modular framework outperforms competitive baselines, including a state-of-the-art RL algorithm, by a significant margin on complex manipulation tasks in obstructed environments.
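
The modular pipeline reads naturally as a short control loop: the OCGM supplies a goal, MP reaches it collision-free, and the transition network bridges to the RL skill. A schematic Python sketch in which every interface (ocgm.identify_target, planner.plan, and the rest) is a hypothetical stand-in:

```python
def execute_task(ocgm, planner, transition_net, rl_policy, robot, scene_image):
    """Schematic of the modular pipeline from the abstract; every
    interface used here is assumed, sketched only to show the
    hand-offs between modules."""
    # 1. One-shot re-identification of the target object in the new scene.
    target_pose = ocgm.identify_target(scene_image)
    # 2. Motion planning guides the arm to the target, avoiding obstacles.
    path = planner.plan(robot.current_state(), target_pose)
    robot.execute(path)
    # 3. Transition network bridges MP's terminal state to a feasible
    #    start state for the learned skill.
    robot.execute(transition_net.bridge(robot.current_state()))
    # 4. Sample-efficient RL policy performs the contact-rich skill.
    obs = robot.observe()
    while not rl_policy.is_done(obs):
        obs = robot.step(rl_policy.act(obs))
```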

* 8 pages, 5 figures 

Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space

Mar 06, 2023
Jun Yamada, Chia-Man Hung, Jack Collins, Ioannis Havoutis, Ingmar Posner

Motion planning framed as optimisation in structured latent spaces has recently emerged as competitive with traditional methods in terms of planning success while significantly outperforming them in terms of computational speed. However, the real-world applicability of recent work in this domain remains limited by the need to express obstacle information directly in state space using simple geometric primitives. In this work, we address this challenge by leveraging learned scene embeddings together with a generative model of the robot manipulator to drive the optimisation process. In addition, we introduce an approach for efficient collision checking which directly regularises the optimisation undertaken for planning. Using simulated as well as real-world experiments, we demonstrate that our approach, AMP-LS, is able to successfully plan in novel, complex scenes while outperforming traditional planning baselines in terms of computational speed by an order of magnitude. We show that the resulting system is fast enough to enable closed-loop planning in real-world dynamic scenes.
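
The core mechanism, gradient descent on a latent code with a learned collision term regularising the plan, can be sketched as follows. The decoder, collision_cost, and loss weights stand in for the learned models described in the paper and are assumptions of this sketch:

```python
import torch

def plan_in_latent_space(z_init, z_goal, decoder, collision_cost,
                         steps=200, lr=1e-2, w_goal=1.0, w_coll=0.1):
    """Hedged sketch: descend on a latent code so the decoded joint
    configuration approaches the goal while a learned collision term
    regularises the optimisation."""
    z = z_init.clone().requires_grad_(True)
    optimiser = torch.optim.Adam([z], lr=lr)
    with torch.no_grad():
        q_goal = decoder(z_goal)             # decode the goal once
    for _ in range(steps):
        optimiser.zero_grad()
        q = decoder(z)                       # latent -> joint configuration
        loss = w_goal * torch.norm(q - q_goal) + w_coll * collision_cost(q)
        loss.backward()
        optimiser.step()
    return decoder(z.detach())
```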

* IEEE International Conference on Robotics and Automation (ICRA), 2023  
* Project website: https://amp-ls.github.io/ 

Follow the Gradient: Crossing the Reality Gap using Differentiable Physics (RealityGrad)

Sep 10, 2021
Jack Collins, Ross Brown, Jürgen Leitner, David Howard

We propose a novel iterative approach for crossing the reality gap that utilises live robot rollouts and differentiable physics. Our method, RealityGrad, demonstrates, for the first time, efficient sim2real transfer combined with real2sim model optimisation for closing the reality gap. Differentiable physics has become an alluring alternative to classical rigid-body simulation thanks to the maturation of automatic differentiation libraries, compute, and non-linear optimisation tools. Our method builds on this progress and employs differentiable physics for efficient trajectory optimisation. We demonstrate RealityGrad on a dynamic control task for a serial-link robot manipulator and present results showing its efficiency and its ability to quickly improve not just the robot's performance on real-world tasks but also the simulation model for future tasks. One iteration of RealityGrad takes less than 22 minutes on a desktop computer while reducing the error by two-thirds, making it efficient compared to other sim2real methods in both compute and time. Our methodology and application of differentiable physics establish a promising approach for crossing the reality gap, with great potential for scaling to complex environments.
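
One RealityGrad-style iteration alternates a real2sim fit of simulator parameters against the latest real rollout with sim2real trajectory optimisation, both driven by gradients through the differentiable simulator. A schematic sketch under assumed interfaces (sim.rollout, sim.task_cost):

```python
import torch

def realitygrad_iteration(sim, params, controls, real_rollout,
                          sysid_steps=100, traj_steps=100, lr=1e-2):
    """Schematic of one iteration under assumed interfaces: real2sim
    fits physics parameters to the recorded rollout, then sim2real
    re-optimises the control trajectory in the corrected simulator."""
    # real2sim: gradient-based system identification.
    p = params.clone().requires_grad_(True)
    opt = torch.optim.Adam([p], lr=lr)
    for _ in range(sysid_steps):
        opt.zero_grad()
        sim_rollout = sim.rollout(p, controls)          # differentiable
        torch.nn.functional.mse_loss(sim_rollout, real_rollout).backward()
        opt.step()
    # sim2real: trajectory optimisation in the identified simulator.
    u = controls.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(traj_steps):
        opt.zero_grad()
        sim.task_cost(p.detach(), u).backward()         # differentiable cost
        opt.step()
    return p.detach(), u.detach()   # deploy u on the robot, then repeat
```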

* 8 pages 

Traversing the Reality Gap via Simulator Tuning

Mar 03, 2020
Jack Collins, Ross Brown, Jürgen Leitner, David Howard

The large demand for simulated data has made the reality gap a problem at the forefront of robotics. We propose a method to traverse the gap by tuning available simulation parameters. Through the optimisation of physics engine parameters, we show that we are able to narrow the gap between simulated solutions and a real-world dataset, and thus allow readier transfer of learned behaviours between the two. We subsequently gain an understanding of the importance of specific simulator parameters, which is of broad interest to the robotic machine learning community. We find that, even when optimised for different tasks, different physics engines perform better in certain scenarios, and that friction and maximum actuator velocity are tightly bounded parameters that greatly impact the transfer of simulated solutions.
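
Concretely, simulator tuning is a black-box optimisation over engine parameters such as friction and maximum actuator velocity, scored by the discrepancy between simulated trajectories and the real dataset. A hedged SciPy sketch; the two-parameter objective and the simulate callable are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def sim_real_error(params, simulate, real_trajectories):
    """Mean squared trajectory error between the tuned simulator and a
    recorded real-world dataset. The two-parameter layout is illustrative."""
    friction, max_actuator_vel = params
    errors = [np.mean((simulate(friction, max_actuator_vel, start) - real) ** 2)
              for start, real in real_trajectories]
    return float(np.mean(errors))

def tune(simulate, real_trajectories, x0=(0.5, 1.0)):
    """Nelder-Mead suits physics engines whose gradients are unavailable."""
    return minimize(sim_real_error, x0=list(x0),
                    args=(simulate, real_trajectories),
                    method="Nelder-Mead")
```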

* 8 pages, submitted to IROS 2020 

Benchmarking Simulated Robotic Manipulation through a Real World Dataset

Nov 27, 2019
Jack Collins, Jessie McVicar, David Wedlock, Ross Brown, David Howard, Jürgen Leitner

We present a benchmark to facilitate simulated manipulation: an attempt to overcome the obstacles of physical benchmarks through the distribution of a real-world, ground-truth dataset. Users are given various simulated manipulation tasks with assigned protocols, with the objective of replicating the real-world results of a recorded dataset. The benchmark comprises a range of metrics used to characterise the successes of submitted environments whilst providing insight into their deficiencies. We apply our benchmark to two simulation environments, PyBullet and V-REP, and publish the results. All materials required to benchmark an environment, including protocols and the dataset, can be found at the benchmark's website: https://research.csiro.au/robotics/manipulation-benchmark/.
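
The flavour of such metrics is easy to state: a submitted simulator is scored by how closely its rollouts of the assigned protocol track the recorded ground-truth object poses. Below is one illustrative pose-error metric of our own devising, not the published metric suite:

```python
import numpy as np

def pose_tracking_error(sim_poses: np.ndarray, real_poses: np.ndarray) -> dict:
    """Per-timestep deviation between a simulated rollout and the recorded
    real-world trajectory. Poses are (T, 7) arrays of xyz position plus a
    unit quaternion; an illustrative metric only."""
    pos_err = np.linalg.norm(sim_poses[:, :3] - real_poses[:, :3], axis=1)
    # Quaternion geodesic angle: theta = 2 * arccos(|<q1, q2>|).
    dots = np.abs(np.sum(sim_poses[:, 3:] * real_poses[:, 3:], axis=1))
    ang_err = 2.0 * np.arccos(np.clip(dots, 0.0, 1.0))
    return {"mean_pos_err_m": float(pos_err.mean()),
            "mean_ang_err_rad": float(ang_err.mean())}
```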

* Accepted to the IEEE Robotics and Automation Letters (RA-L) Special Issue: Benchmarking Protocols for Robotic Manipulation (2019) 

Comparing Direct and Indirect Representations for Environment-Specific Robot Component Design

Jan 21, 2019
Jack Collins, Ben Cottier, David Howard

We compare two representations used to define the morphology of legs for a hexapod robot, which are subsequently 3D printed. A leg morphology occupies a set of voxels in a voxel grid. The first method, a direct representation, uses a collection of Bézier splines. The second, an indirect method, utilises CPPN-NEAT. In our first experiment, we investigate two strategies to post-process the CPPN output and ensure leg-length constraints are met: the first uses an adaptive threshold on the output neuron; the second, previously reported in the literature, scales the largest generated artefact to the desired length. In our second experiment, we build on our past work evolving the tibia of a hexapod to provide environment-specific performance benefits. We compare the performance of our direct and indirect legs across three distinct environments, represented in a high-fidelity simulator. Results are significant and support our hypothesis that the indirect representation allows further exploration of the design space, leading to improved fitness.
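
The adaptive-threshold strategy admits a compact description: sweep the cutoff applied to the CPPN's output volume until the occupied voxels span the required leg length. A hedged NumPy sketch of that idea, in which the leg axis and sweep resolution are assumptions:

```python
import numpy as np

def adaptive_threshold(cppn_output: np.ndarray, target_length: int) -> np.ndarray:
    """Sketch of the adaptive-threshold idea: lower the cutoff on the CPPN
    output volume until the occupied voxels span the target length along
    the leg axis (axis 0 here, by assumption)."""
    for threshold in np.linspace(cppn_output.max(), cppn_output.min(), 100):
        voxels = cppn_output >= threshold
        occupied_slices = np.flatnonzero(voxels.any(axis=(1, 2)))
        if occupied_slices.size and \
           occupied_slices[-1] - occupied_slices[0] + 1 >= target_length:
            return voxels
    return cppn_output >= cppn_output.min()   # fall back: fully occupied grid
```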

* 8 pages, submitted to the 2019 IEEE Congress on Evolutionary Computation (under review) 