
Jinseo Jeong


Continual Learning on Noisy Data Streams via Self-Purified Replay

Oct 14, 2021
Chris Dongjoo Kim, Jinseo Jeong, Sangwoo Moon, Gunhee Kim

Figures 1–4 for Continual Learning on Noisy Data Streams via Self-Purified Replay

Continual learning in the real world must overcome many challenges, among which noisy labels are a common and inevitable issue. In this work, we present a replay-based continual learning framework that simultaneously addresses both catastrophic forgetting and noisy labels for the first time. Our solution is based on two observations: (i) forgetting can be mitigated even with noisy labels via self-supervised learning, and (ii) the purity of the replay buffer is crucial. Building on these observations, we propose two key components of our method: (i) a self-supervised replay technique named Self-Replay, which can circumvent erroneous training signals arising from noisily labeled data, and (ii) the Self-Centered filter, which maintains a purified replay buffer via centrality-based stochastic graph ensembles. Empirical results on MNIST, CIFAR-10, CIFAR-100, and WebVision with real-world noise demonstrate that our framework can maintain a highly pure replay buffer amidst noisy streamed data while greatly outperforming combinations of state-of-the-art continual learning and noisy-label learning methods. The source code is available at http://vision.snu.ac.kr/projects/SPR
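The core intuition behind centrality-based buffer purification can be illustrated with a toy sketch: build a similarity graph over candidate samples' features and keep only the most central nodes, since clean same-class samples tend to form a dense cluster while mislabeled ones sit on the periphery. This is a minimal, hypothetical illustration of the idea only — the paper's Self-Centered filter additionally uses stochastic graph ensembles and a probabilistic treatment of centrality, and `centrality_filter` and its parameters are invented for this sketch.

```python
import numpy as np

def centrality_filter(features, keep):
    """Keep the `keep` most central samples of a cosine-similarity
    graph, estimated with eigenvector centrality (power iteration)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = np.clip(f @ f.T, 0.0, None)   # non-negative affinity graph
    np.fill_diagonal(sim, 0.0)          # no self-loops
    c = np.ones(len(f)) / len(f)
    for _ in range(100):                # power iteration
        c = sim @ c
        c /= np.linalg.norm(c)
    return np.argsort(c)[-keep:]        # indices of most central samples

rng = np.random.default_rng(0)
clean = rng.normal(loc=1.0, scale=0.1, size=(8, 16))    # tight cluster
noisy = rng.normal(loc=-1.0, scale=0.1, size=(2, 16))   # outliers
kept = centrality_filter(np.vstack([clean, noisy]), keep=8)
print(sorted(kept))   # the 8 clean samples survive the filter
```

The outliers point in a direction nearly opposite to the cluster, so their clipped cosine affinity to it is ~0 and the dominant eigenvector concentrates its mass on the clean samples.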

* Published at ICCV 2021 main conference 

Imbalanced Continual Learning with Partitioning Reservoir Sampling

Sep 08, 2020
Chris Dongjoo Kim, Jinseo Jeong, Gunhee Kim

Figures 1–4 for Imbalanced Continual Learning with Partitioning Reservoir Sampling

Continual learning from a sequential stream of data is a crucial challenge for machine learning research. Most studies on this topic have been conducted under the single-label classification setting, along with an assumption of balanced label distribution. This work expands that research horizon towards multi-label classification. In doing so, we identify an unanticipated adversity inherent in many multi-label datasets: the long-tailed distribution. We jointly address two problems that have so far been solved independently, catastrophic forgetting and the long-tailed label distribution, by first empirically demonstrating a new challenge of destructive forgetting of the minority concepts on the tail. Then, we curate two benchmark datasets, COCOseq and NUS-WIDEseq, that allow the study of both intra- and inter-task imbalances. Lastly, we propose a new sampling strategy for replay-based approaches named Partitioning Reservoir Sampling (PRS), which allows the model to maintain a balanced knowledge of both head and tail classes. We publicly release the datasets and the code on our project page.
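The balancing idea behind partitioned reservoir sampling can be sketched in a few lines: split the replay buffer into per-class partitions of equal size and run an independent reservoir update inside each, so a rare tail class keeps its share of memory no matter how skewed the stream is. This is a simplified, hypothetical sketch only — the paper's PRS allocates partition sizes adaptively from running label statistics and handles multi-label samples, and `PartitionedReservoir` is a name invented here.

```python
import random
from collections import defaultdict

class PartitionedReservoir:
    """Toy class-partitioned reservoir buffer: each label owns an
    equal slice of the capacity and performs classic reservoir
    sampling within that slice."""

    def __init__(self, capacity, num_classes, seed=0):
        self.per_class = capacity // num_classes
        self.buffers = defaultdict(list)   # label -> stored samples
        self.seen = defaultdict(int)       # label -> samples streamed
        self.rng = random.Random(seed)

    def add(self, sample, label):
        self.seen[label] += 1
        buf = self.buffers[label]
        if len(buf) < self.per_class:
            buf.append(sample)
        else:
            # classic reservoir step, but only within this label's slice
            j = self.rng.randrange(self.seen[label])
            if j < self.per_class:
                buf[j] = sample

buffer = PartitionedReservoir(capacity=20, num_classes=2)
# heavily imbalanced stream: 990 head samples vs 10 tail samples
for i in range(990):
    buffer.add(("head", i), label=0)
for i in range(10):
    buffer.add(("tail", i), label=1)
print(len(buffer.buffers[0]), len(buffer.buffers[1]))  # prints: 10 10
```

Plain reservoir sampling over the same stream would store head and tail samples in proportion to their frequency (~99:1), whereas the partitioned buffer retains an equal share of each class.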

* Published at ECCV 2020 