Soon-Jo Chung

Hierarchical Meta-learning-based Adaptive Controller

Nov 23, 2023
Fengze Xie, Guanya Shi, Michael O'Connell, Yisong Yue, Soon-Jo Chung

We study how to design learning-based adaptive controllers that enable fast and accurate online adaptation in changing environments. In these settings, learning is typically done during an initial (offline) design phase, where the vehicle is exposed to different environmental conditions and disturbances (e.g., a drone exposed to different winds) to collect training data. Our work is motivated by the observation that real-world disturbances fall into two categories: 1) those that can be directly monitored or controlled during training, which we call "manageable", and 2) those that cannot be directly measured or controlled (e.g., nominal model mismatch, air plate effects, and unpredictable wind), which we call "latent". Imprecise modeling of these effects can result in degraded control performance, particularly when latent disturbances continuously vary. This paper presents the Hierarchical Meta-learning-based Adaptive Controller (HMAC) to learn and adapt to such multi-source disturbances. Within HMAC, we develop two techniques: 1) Hierarchical Iterative Learning, which jointly trains representations to capture the various sources of disturbances, and 2) Smoothed Streaming Meta-Learning, which learns to capture the evolving structure of latent disturbances over time (in addition to standard meta-learning on the manageable disturbances). Experimental results demonstrate that HMAC exhibits more precise and rapid adaptation to multi-source disturbances than other adaptive controllers.
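
As a rough illustration of the adaptation structure described above (not the paper's actual algorithm), the sketch below keeps a meta-learned basis fixed online and adapts only a low-dimensional coefficient vector to the current disturbance; `phi`, the feature dimension, and the toy signals are all hypothetical stand-ins.

```python
import numpy as np

def phi(x):
    # Hypothetical stand-in for a meta-learned representation; fixed random
    # features play the role of the trained basis here.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, x.size))
    return np.sin(W @ x)

def adapt_step(a, x, residual, lr=0.1):
    """One online adaptation step: fit phi(x) @ a to the observed disturbance
    residual (measured minus nominal dynamics) by gradient descent."""
    pred = phi(x) @ a
    return a - lr * (pred - residual) * phi(x)

a = np.zeros(8)
for t in range(100):
    x = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])  # toy state
    residual = 0.5 * np.sin(0.1 * t)                  # toy latent disturbance
    a = adapt_step(a, x, residual)
```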

* Submitted to ICRA 2024 

Joint-Space Multi-Robot Motion Planning with Learned Decentralized Heuristics

Nov 21, 2023
Fengze Xie, Marcus Dominguez-Kuhne, Benjamin Riviere, Jialin Song, Wolfgang Hönig, Soon-Jo Chung, Yisong Yue

In this paper, we present a method of multi-robot motion planning that biases centralized, sampling-based tree search with decentralized, data-driven steer and distance heuristics. Over a range of robot and obstacle densities, we evaluate plain Rapidly-exploring Random Trees (RRT) and variants of our method for double-integrator dynamics. We show that whereas plain RRT fails in every instance to plan for 4 robots, our method can plan for up to 16 robots, corresponding to searching through a very large 65-dimensional space, which validates the effectiveness of data-driven heuristics at combating exponential search-space growth. We also find that the heuristic information is complementary: using both heuristics produces search trees with lower failure rates, fewer nodes, and lower path costs than using either in isolation. These results illustrate the effective decomposition of high-dimensional joint-space motion planning problems into local problems.
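
For intuition on how a learned heuristic can bias sampling-based search (a toy analogue, not the paper's trained networks), the sketch below swaps RRT's Euclidean nearest-neighbor metric for a hand-written stand-in, `learned_distance`, that accounts for velocity the way a trained cost-to-connect estimator for double-integrator dynamics might.

```python
import numpy as np

def learned_distance(q1, q2):
    # Hypothetical stand-in for the data-driven distance heuristic; a trained
    # network would estimate cost-to-connect under the robot dynamics.
    pos, vel = q1[:2] - q2[:2], q1[2:] - q2[2:]
    return np.linalg.norm(pos) + 0.5 * np.linalg.norm(vel)

def nearest(tree, q_rand):
    """Select the expansion node by the learned metric instead of Euclidean
    distance, biasing the tree toward dynamically reachable connections."""
    return min(tree, key=lambda q: learned_distance(q, q_rand))

tree = [np.zeros(4)]  # root node: position (2) + velocity (2)
q_rand = np.random.default_rng(1).uniform(-1, 1, 4)
q_near = nearest(tree, q_rand)
```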


RGB-X Object Detection via Scene-Specific Fusion Modules

Oct 30, 2023
Sri Aditya Deevi, Connor Lee, Lu Gan, Sushruth Nagesh, Gaurav Pandey, Soon-Jo Chung

Multimodal deep sensor fusion has the potential to enable autonomous vehicles to visually understand their surrounding environments in all weather conditions. However, existing deep sensor fusion methods usually employ convoluted architectures with intermingled multimodal features, requiring large coregistered multimodal datasets for training. In this work, we present an efficient and modular RGB-X fusion network that leverages and fuses pretrained single-modal models via scene-specific fusion modules, thereby enabling joint input-adaptive network architectures to be created using small, coregistered multimodal datasets. Our experiments demonstrate the superiority of our method compared to existing works on RGB-thermal and RGB-gated datasets, performing fusion using only a small number of additional parameters. Our code is available at https://github.com/dsriaditya999/RGBXFusion.
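
One way to read "scene-specific fusion modules" is as small trainable blocks that blend features from frozen single-modal backbones. The PyTorch sketch below is a hypothetical per-pixel gating module under that assumption, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Hypothetical sketch: pretrained single-modal backbones stay frozen,
    and only this lightweight gate is trained on the small coregistered
    dataset to blend RGB and X-modality features adaptively."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_rgb, feat_x):
        w = self.gate(torch.cat([feat_rgb, feat_x], dim=1))  # per-pixel weight
        return w * feat_rgb + (1 - w) * feat_x               # adaptive blend

fuse = FusionModule(channels=64)
f_rgb, f_x = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
fused = fuse(f_rgb, f_x)  # shape: (1, 64, 32, 32)
```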

* Accepted to 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) 

Online Self-Supervised Thermal Water Segmentation for Aerial Vehicles

Jul 18, 2023
Connor Lee, Jonathan Gustafsson Frennert, Lu Gan, Matthew Anderson, Soon-Jo Chung

We present a new method to adapt an RGB-trained water segmentation network to target-domain aerial thermal imagery using online self-supervision, with texture and motion cues serving as supervisory signals. This new thermal capability enables autonomous aerial robots operating in near-shore environments to perform tasks such as visual navigation, bathymetry, and flow tracking at night. Our method overcomes the scarcity of difficult-to-obtain near-shore thermal data, which prevents the application of conventional supervised and unsupervised methods. In this work, we curate the first aerial thermal near-shore dataset, show that our approach outperforms fully-supervised segmentation models trained on limited target-domain thermal data, and demonstrate real-time capabilities onboard an Nvidia Jetson embedded computing platform. Code and datasets used in this work will be available at: https://github.com/connorlee77/uav-thermal-water-segmentation.
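
To make the texture cue concrete, here is a minimal sketch of one plausible instantiation: water surfaces in aerial thermal imagery tend to be low-texture, so low local variance can serve as a weak "water" pseudo-label. The threshold, patch size, and label convention are all assumptions, not the paper's implementation.

```python
import numpy as np

def texture_pseudolabels(thermal, patch=8, thresh=5.0):
    """Hypothetical texture cue: label low-variance patches as water (1),
    high-variance patches as non-water (0), and leave the rest ignored (-1).
    Such sparse pseudo-labels could drive online fine-tuning."""
    h, w = thermal.shape
    labels = -np.ones((h, w), dtype=np.int8)
    for i in range(0, h - patch, patch):
        for j in range(0, w - patch, patch):
            var = thermal[i:i + patch, j:j + patch].var()
            labels[i:i + patch, j:j + patch] = 1 if var < thresh else 0
    return labels

frame = np.random.default_rng(2).normal(20.0, 3.0, (64, 64))  # toy thermal frame
mask = texture_pseudolabels(frame)
```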

* 8 pages, 4 figures, 3 tables 

CART: Collision Avoidance and Robust Tracking Augmentation in Learning-based Motion Planning for Multi-Agent Systems

Jul 13, 2023
Hiroyasu Tsukamoto, Benjamin Rivière, Changrak Choi, Amir Rahmani, Soon-Jo Chung

This paper presents CART, an analytical method to augment a learning-based, distributed motion planning policy of a nonlinear multi-agent system with real-time collision avoidance and robust tracking guarantees, independently of learning errors. We first derive an analytical form of an optimal safety filter for Lagrangian systems, which formally ensures collision-free operation in a multi-agent setting in a disturbance-free environment, while allowing for distributed implementation with minimal deviation from the learned policy. We then propose an analytical form of an optimal robust filter for Lagrangian systems to be used hierarchically with the learned collision-free target trajectory, which also enables distributed implementation and guarantees exponential boundedness of the trajectory tracking error for safety, even in the presence of deterministic and stochastic disturbances. These results are shown to extend further to general control-affine nonlinear systems using contraction theory. Our key contribution is to enhance the performance of the learned motion planning policy with collision avoidance and tracking-based robustness guarantees, independently of the policy's original performance characteristics, such as approximation errors and regret bounds in machine learning. We demonstrate the effectiveness of CART in motion planning and control of several examples of nonlinear systems, including spacecraft formation flying and rotor-failed UAV swarms.
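
CART's filters are derived analytically for Lagrangian systems; purely as a generic illustration of the "minimal deviation from the learned policy" idea, the sketch below projects a learned input onto a single linear safety constraint in closed form. All symbols and the barrier-style condition are illustrative, not CART's actual filter.

```python
import numpy as np

def safety_filter(u_learned, grad_h, lie_f, alpha_h):
    """Hypothetical min-norm safety filter: change the learned input as
    little as possible so that the affine safety condition
    grad_h @ u + lie_f + alpha_h >= 0 holds (half-space projection)."""
    margin = grad_h @ u_learned + lie_f + alpha_h
    if margin >= 0:
        return u_learned  # learned input is already safe
    # Closed-form correction along grad_h onto the constraint boundary.
    return u_learned - margin * grad_h / (grad_h @ grad_h)

u_safe = safety_filter(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                       lie_f=-0.5, alpha_h=0.2)  # -> [1.0, 0.3]
```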

* IEEE Conference on Decision and Control (CDC), Preprint Version, Accepted July, 2023 

Interstellar Object Accessibility and Mission Design

Oct 26, 2022
Benjamin P. S. Donitz, Declan Mages, Hiroyasu Tsukamoto, Peter Dixon, Damon Landau, Soon-Jo Chung, Erica Bufanda, Michel Ingham, Julie Castillo-Rogez

Interstellar objects (ISOs) are fascinating and under-explored celestial objects, providing physical laboratories to understand the formation of our solar system and probe the composition and properties of material formed in exoplanetary systems. This paper discusses the accessibility of and mission design to ISOs with varying characteristics, including a discussion of state covariance estimation over the course of a cruise, handoffs from traditional navigation approaches to novel autonomous navigation for fast flyby regimes, and overall recommendations about preparing for the future in situ exploration of these targets. The lessons learned also apply to the fast flyby of other small bodies, including long-period comets and potentially hazardous asteroids, which also require a tactical response with similar characteristics.
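
As a minimal numerical aside on the cruise-phase covariance estimation mentioned above, the sketch below shows the standard linear(ized) covariance prediction step P <- F P F^T + Q; the dynamics, step size, and noise levels are toy values, not mission parameters.

```python
import numpy as np

def propagate_covariance(P, F, Q):
    """One prediction step: the state transition F grows the uncertainty,
    and process noise Q accounts for unmodeled accelerations."""
    return F @ P @ F.T + Q

dt = 60.0                              # toy 1-minute step, 1-D cruise model
F = np.array([[1.0, dt], [0.0, 1.0]])  # position-velocity transition
Q = np.diag([1e-6, 1e-9])              # illustrative process noise
P = np.diag([1.0, 1e-4])               # initial position/velocity variance
for _ in range(1000):
    P = propagate_covariance(P, F, Q)
```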

* Accepted at IEEE Aerospace Conference 

Unsupervised RGB-to-Thermal Domain Adaptation via Multi-Domain Attention Network

Oct 09, 2022
Lu Gan, Connor Lee, Soon-Jo Chung

This work presents a new method for unsupervised thermal image classification and semantic segmentation by transferring knowledge from the RGB domain using a multi-domain attention network. Our method does not require any thermal annotations or co-registered RGB-thermal pairs, enabling robots to perform visual tasks at night and in adverse weather conditions without incurring additional costs of data labeling and registration. Current unsupervised domain adaptation methods look to align global images or features across domains. However, for cross-modal data, where the domain shift is significantly larger, not all features can be transferred. We solve this problem by using a shared backbone network that promotes generalization, and domain-specific attention that reduces negative transfer by attending to domain-invariant and easily-transferable features. Our approach outperforms the state-of-the-art RGB-to-thermal adaptation method in classification benchmarks, and is successfully applied to thermal river scene segmentation using only synthetic RGB images. Our code is made publicly available at https://github.com/ganlumomo/thermal-uda-attention.
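
A minimal PyTorch sketch of the shared-backbone-plus-domain-specific-attention idea follows; the squeeze-excite-style gate, layer sizes, and two-domain indexing are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class DomainAttention(nn.Module):
    """Hypothetical sketch: one shared backbone promotes generalization,
    while per-domain channel attention lets each domain emphasize
    domain-invariant, easily transferable features."""
    def __init__(self, channels, n_domains=2):
        super().__init__()
        self.backbone = nn.Conv2d(3, channels, 3, padding=1)  # stand-in layer
        self.attn = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
            for _ in range(n_domains)
        )

    def forward(self, x, domain):
        feat = self.backbone(x)                       # shared features
        squeeze = feat.mean(dim=(2, 3))               # global average pool
        w = self.attn[domain](squeeze)[..., None, None]
        return feat * w                               # domain-specific reweighting

model = DomainAttention(channels=16)
rgb, thermal = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32)
f_rgb, f_thm = model(rgb, 0), model(thermal, 1)
```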


Neural-Rendezvous: Learning-based Robust Guidance and Control to Encounter Interstellar Objects

Aug 09, 2022
Hiroyasu Tsukamoto, Soon-Jo Chung, Benjamin Donitz, Michel Ingham, Declan Mages, Yashwanth Kumar Nakka

Interstellar objects (ISOs), astronomical objects not gravitationally bound to the Sun, are likely representatives of primitive materials invaluable in understanding exoplanetary star systems. Due to their poorly constrained orbits with generally high inclinations and relative velocities, however, exploring ISOs with conventional human-in-the-loop approaches is significantly challenging. This paper presents Neural-Rendezvous -- a deep learning-based guidance and control framework for encountering fast-moving objects, including ISOs, robustly, accurately, and autonomously in real time. It uses pointwise minimum-norm tracking control on top of a guidance policy modeled by a spectrally-normalized deep neural network, whose hyperparameters are tuned with a newly introduced loss function that directly penalizes the state trajectory tracking error. We rigorously show that, even in the challenging case of ISO exploration, Neural-Rendezvous provides 1) a high-probability exponential bound on the expected spacecraft delivery error, and 2) a finite optimality gap with respect to the solution of model predictive control, both of which are indispensable for such a critical space mission. In numerical simulations, Neural-Rendezvous is demonstrated to achieve a terminal-time delivery error of less than 0.2 km for 99% of the ISO candidates with realistic state uncertainty, while retaining computational efficiency sufficient for real-time implementation.
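
The guidance policy above is modeled by a spectrally-normalized deep neural network; the sketch below isolates that one standard ingredient, using power iteration to bound a layer's largest singular value. It illustrates the generic technique only, not the paper's full framework.

```python
import numpy as np

def spectral_normalize(W, bound=1.0, iters=20):
    """Rescale a weight matrix so its largest singular value is at most
    `bound`, yielding a Lipschitz-bounded layer as in spectrally-normalized
    networks. Power iteration estimates the top singular value."""
    v = np.random.default_rng(3).standard_normal(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    sigma = u @ W @ v          # estimated largest singular value
    return W * min(1.0, bound / sigma)

W = np.random.default_rng(4).standard_normal((4, 6))
W_sn = spectral_normalize(W)   # largest singular value now <= 1
```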

* Submitted to AIAA Journal of Guidance, Control, and Dynamics 

Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds

May 13, 2022
Michael O'Connell, Guanya Shi, Xichen Shi, Kamyar Azizzadenesheli, Anima Anandkumar, Yisong Yue, Soon-Jo Chung

Executing safe and precise flight maneuvers in dynamic high-speed winds is important for the ongoing commoditization of uninhabited aerial vehicles (UAVs). However, because the relationship between various wind conditions and their effects on aircraft maneuverability is not well understood, it is challenging to design effective robot controllers using traditional control design methods. We present Neural-Fly, a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning. Neural-Fly builds on two key observations: aerodynamics in different wind conditions share a common representation, and the wind-specific part lies in a low-dimensional space. To that end, Neural-Fly uses a proposed learning algorithm, domain adversarially invariant meta-learning (DAIML), to learn the shared representation using only 12 minutes of flight data. With the learned representation as a basis, Neural-Fly then uses a composite adaptation law to update a set of linear coefficients for mixing the basis elements. When evaluated under challenging wind conditions generated with the Caltech Real Weather Wind Tunnel, with wind speeds up to 43.6 kilometers/hour (12.1 meters/second), Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers. In addition to strong empirical performance, the exponential stability of Neural-Fly results in robustness guarantees. Last, our control design extrapolates to unseen wind conditions, is shown to be effective for outdoor flights with only onboard sensors, and can transfer across drones with minimal performance degradation.
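
To illustrate the shape of a composite adaptation law (coefficients driven by both tracking error and prediction error), here is a hypothetical fixed-gain, discrete-time sketch; the paper's actual law is covariance-weighted, and the signs, gains, and dimensions below are illustrative conventions only.

```python
import numpy as np

def composite_update(a, Phi, s, y, lam=0.1, k_track=1.0, k_pred=1.0, dt=0.01):
    """One discrete-time composite adaptation step: the coefficients `a`
    mixing the learned basis Phi are driven by the tracking error `s` and
    the prediction residual of the learned force model, with damping."""
    pred_err = Phi @ a - y  # predicted minus measured residual force
    a_dot = -lam * a - k_track * Phi.T @ s - k_pred * Phi.T @ pred_err
    return a + dt * a_dot

Phi = np.random.default_rng(5).standard_normal((3, 4))  # basis at current state
a = np.zeros(4)
s = np.array([0.1, 0.0, -0.05])   # composite tracking error (toy values)
y = np.array([0.2, -0.1, 0.0])    # measured residual force (toy values)
a = composite_update(a, Phi, s, y)
```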

* This is the accepted version of Science Robotics Vol. 7, Issue 66, eabm6597 (2022). Video: https://youtu.be/TuF9teCZX0U 