Sangwoo Moon

Fast and Scalable Signal Inference for Active Robotic Source Seeking

Jan 06, 2023
Christopher E. Denniston, Oriana Peltzer, Joshua Ott, Sangwoo Moon, Sung-Kyun Kim, Gaurav S. Sukhatme, Mykel J. Kochenderfer, Mac Schwager, Ali-akbar Agha-mohammadi

In active source seeking, a robot takes repeated measurements in order to locate a signal source in a cluttered and unknown environment. A key component of an active source seeking robot planner is a model that can produce estimates of the signal at unknown locations with uncertainty quantification. This model allows the robot to plan for future measurements in the environment. Traditionally, this model has been a Gaussian process, which scales poorly with the number of measurements and cannot represent obstacles. We propose a global and local factor graph model for active source seeking, which allows the model to scale to a large number of measurements and to represent unknown obstacles in the environment. We combine this model with extensions to a highly scalable planner to form a system for large-scale active source seeking. We demonstrate that our approach outperforms baseline methods in both simulated and real robot experiments.
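As a point of contrast for the paper's factor-graph model, the sketch below shows the kind of Gaussian-process signal model the abstract refers to: fit a GP to past measurements, then choose the next measurement location with an upper-confidence-bound rule. This is an illustrative baseline using scikit-learn, not the paper's method; the toy source location, kernel, and UCB weight are assumptions made up for the example.

```python
# Minimal sketch (not the paper's factor-graph model): a Gaussian-process
# signal model of the kind the abstract says scales poorly. The robot fits
# a GP to past measurements, then greedily picks the candidate location
# with the highest upper confidence bound (mean + beta * std).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
measured_xy = rng.uniform(0, 10, size=(25, 2))                   # past measurement locations
signal = np.exp(-np.linalg.norm(measured_xy - [7, 3], axis=1))   # toy source at (7, 3)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(measured_xy, signal)

candidates = rng.uniform(0, 10, size=(200, 2))                   # possible next measurements
mean, std = gp.predict(candidates, return_std=True)
next_xy = candidates[np.argmax(mean + 1.0 * std)]                # UCB acquisition
print("next measurement location:", next_xy)
```

Note that fitting the GP costs cubic time in the number of measurements and the model has no notion of obstacles, which is the scaling and representation gap the proposed factor-graph model targets.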

* 6 pages, Submitted to ICRA 2023 

Continual Learning on Noisy Data Streams via Self-Purified Replay

Oct 14, 2021
Chris Dongjoo Kim, Jinseo Jeong, Sangwoo Moon, Gunhee Kim

Continual learning in the real world must overcome many challenges, among which noisy labels are a common and inevitable issue. In this work, we present a replay-based continual learning framework that, for the first time, simultaneously addresses both catastrophic forgetting and noisy labels. Our solution is based on two observations: (i) forgetting can be mitigated even with noisy labels via self-supervised learning, and (ii) the purity of the replay buffer is crucial. Building on these observations, we propose two key components of our method: (i) a self-supervised replay technique named Self-Replay, which can circumvent erroneous training signals arising from noisily labeled data, and (ii) the Self-Centered filter, which maintains a purified replay buffer via centrality-based stochastic graph ensembles. The empirical results on MNIST, CIFAR-10, CIFAR-100, and WebVision with real-world noise demonstrate that our framework can maintain a highly pure replay buffer amidst noisy streamed data while greatly outperforming combinations of state-of-the-art continual learning and noisy-label learning methods. The source code is available at http://vision.snu.ac.kr/projects/SPR
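To make the centrality idea concrete, here is a minimal, hypothetical sketch of a centrality-based purity filter in the spirit of the Self-Centered filter: within each label group, score every sample by how strongly its features agree with same-label neighbors in a cosine-similarity graph, and keep only the most central samples in the buffer. The function name, the plain degree-centrality score, and the keep-half rule are illustrative assumptions, not the authors' implementation (which uses stochastic graph ensembles).

```python
# Hedged sketch of a centrality-based purity filter (illustrative only).
# Intuition: samples whose features agree with many same-label neighbors
# are central in the similarity graph and more likely correctly labeled.
import numpy as np

def purity_scores(features, labels):
    """Return a per-sample centrality score within each label group."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    scores = np.zeros(len(labels))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        sim = feats[idx] @ feats[idx].T      # cosine-similarity graph
        np.fill_diagonal(sim, 0.0)           # ignore self-similarity
        scores[idx] = sim.sum(axis=1)        # degree centrality
    return scores

# Keep the most central half of the buffer as the "purified" replay set.
feats = np.random.randn(100, 16)
labels = np.random.randint(0, 5, size=100)
scores = purity_scores(feats, labels)
keep = np.argsort(scores)[len(scores) // 2:]
```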

* Published at ICCV 2021 main conference 

Learning to Schedule Communication in Multi-agent Reinforcement Learning

Feb 05, 2019
Daewoo Kim, Sangwoo Moon, David Hostallero, Wan Ju Kang, Taeyoung Lee, Kyunghwan Son, Yung Yi

Many real-world reinforcement learning tasks require multiple agents to make sequential decisions while interacting with one another, and well-coordinated actions among the agents are crucial to achieving the target goal in these tasks. One way to accelerate coordination is to enable agents to communicate with each other in a distributed manner and behave as a group. In this paper, we study a practical scenario in which (i) the communication bandwidth is limited and (ii) the agents share the communication medium, so that only a restricted number of agents can use the medium simultaneously, as in state-of-the-art wireless networking standards. This calls for a form of communication scheduling. To that end, we propose a multi-agent deep reinforcement learning framework, called SchedNet, in which agents learn how to schedule themselves, how to encode messages, and how to select actions based on received messages. SchedNet decides which agents should be entitled to broadcast their (encoded) messages by learning the importance of each agent's partially observed information. We evaluate SchedNet against multiple baselines in two applications: cooperative communication and navigation, and predator-prey. Our experiments show a non-negligible performance gap, ranging from 32% to 43%, between SchedNet and alternatives such as agents without communication or with vanilla scheduling methods, e.g., round robin.
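A highly simplified sketch of the scheduling idea may help: each agent produces an importance weight from its partial observation, and a medium-access step lets only the top-k weighted agents broadcast their encoded messages. The module structure below is an illustrative assumption, not SchedNet's actual architecture or training procedure.

```python
# Illustrative sketch of learned communication scheduling (simplified):
# each agent scores the importance of its own observation, and only the
# top-k scorers may broadcast an encoded message this step.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim, msg_dim):
        super().__init__()
        self.weight_gen = nn.Linear(obs_dim, 1)     # importance of my observation
        self.encoder = nn.Linear(obs_dim, msg_dim)  # message encoder

    def forward(self, obs):
        return self.weight_gen(obs), self.encoder(obs)

def schedule(agents, observations, k):
    """Medium-access control: only k agents broadcast per step."""
    weights, messages = zip(*(a(o) for a, o in zip(agents, observations)))
    chosen = torch.topk(torch.cat(weights), k).indices  # top-k importance
    return [messages[i] for i in chosen], chosen

agents = [Agent(obs_dim=8, msg_dim=4) for _ in range(5)]
obs = [torch.randn(8) for _ in range(5)]
msgs, who = schedule(agents, obs, k=2)  # only 2 of 5 agents get the medium
```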

* Accepted in ICLR 2019 