
Joonmyung Choi


Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification

Aug 23, 2023
Injae Kim, Jongha Kim, Joonmyung Choi, Hyunwoo J. Kim

Interpretability is a crucial factor in building reliable models for various medical applications. Concept Bottleneck Models (CBMs) enable interpretable image classification by using human-understandable concepts as intermediate targets. Unlike conventional methods that require extensive human labor to construct the concept set, recent works leverage Large Language Models (LLMs) to generate concepts automatically. However, these methods do not consider whether a concept is visually relevant, which is essential for computing meaningful concept scores. We therefore propose a visual activation score that measures whether a concept contains visual cues and can be easily computed from unlabeled image data. The computed visual activation scores are then used to filter out less visible concepts, yielding a final concept set of visually meaningful concepts. Experimental results show that filtering concepts with the proposed visual activation score consistently boosts performance over the baseline. Moreover, qualitative analyses confirm that visually relevant concepts are successfully selected by the visual activation score.

* Accepted to MedAGI Workshop at MICCAI 2023 (Oral Presentation) 
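For a rough sense of how such a filter could work, here is a minimal sketch that scores each LLM-generated concept by its average activation against a pool of unlabeled image embeddings from a CLIP-style encoder and keeps only the top-scoring concepts. The function names, the use of mean cosine similarity as the score, and the keep ratio are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch: score each LLM-generated concept by how strongly it
# activates on a pool of unlabeled images, then keep the most "visible" ones.
# Assumes precomputed, L2-normalized CLIP-style embeddings; all names are illustrative.
import torch

@torch.no_grad()
def visual_activation_scores(image_feats, text_feats):
    """image_feats: (N, D) unlabeled image embeddings, L2-normalized.
    text_feats: (C, D) concept text embeddings, L2-normalized.
    Returns one score per concept: its mean cosine similarity over the images."""
    sims = text_feats @ image_feats.T   # (C, N) cosine similarities
    return sims.mean(dim=1)             # (C,) average visual activation per concept

def filter_concepts(concepts, scores, keep_ratio=0.8):
    """Drop the least visually activated concepts (keep_ratio is an assumed hyperparameter)."""
    k = max(1, int(len(concepts) * keep_ratio))
    keep = torch.topk(scores, k).indices
    return [concepts[i] for i in keep.tolist()]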
Viaarxiv icon

MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models

Mar 23, 2023
Dohwan Ko, Joonmyung Choi, Hyeong Kyu Choi, Kyoung-Woon On, Byungseok Roh, Hyunwoo J. Kim


Foundation models have shown outstanding performance and generalization capabilities across domains. Since most studies on foundation models mainly focus on the pretraining phase, a naive strategy of minimizing a single task-specific loss is typically adopted for fine-tuning. However, such fine-tuning methods do not fully leverage other losses that are potentially beneficial for the target task. Therefore, we propose MEta Loss TRansformer (MELTR), a plug-in module that automatically and non-linearly combines various loss functions to aid learning of the target task via auxiliary learning. We formulate the auxiliary learning as a bi-level optimization problem and present an efficient optimization algorithm based on Approximate Implicit Differentiation (AID). For evaluation, we apply our framework to various video foundation models (UniVL, Violet and All-in-one) and show significant performance gains on all four downstream tasks: text-to-video retrieval, video question answering, video captioning, and multi-modal sentiment analysis. Our qualitative analyses demonstrate that MELTR adequately `transforms' individual loss functions and `melts' them into an effective unified loss. Code is available at https://github.com/mlvlab/MELTR.

* Accepted paper at CVPR 2023 
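As a rough illustration of the core idea, the sketch below embeds each auxiliary loss value as a token, mixes the tokens with a small transformer encoder, and regresses a single unified training loss. The module shape and hyperparameters are assumptions made for illustration; the bi-level, AID-based update of this module is omitted, and the official repository linked above contains the actual implementation.

# Rough sketch of a loss-combining transformer in the spirit of MELTR:
# each auxiliary loss value becomes a token, a small transformer mixes them,
# and a head regresses a single unified training loss. The bi-level (AID-based)
# optimization of this module is not shown here.
import torch
import torch.nn as nn

class LossTransformer(nn.Module):
    def __init__(self, num_losses, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(1, dim)                                  # scalar loss -> token
        self.loss_tokens = nn.Parameter(torch.randn(num_losses, dim))   # learned per-loss identity
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, losses):
        # losses: (num_losses,) tensor of task/auxiliary loss values
        x = self.embed(losses.view(-1, 1)) + self.loss_tokens   # (L, dim)
        x = self.encoder(x.unsqueeze(0)).squeeze(0)             # (L, dim)
        return self.head(x).mean()                              # unified scalar loss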

TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers

Oct 14, 2022
Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim


Mixup is a commonly adopted data augmentation technique for image classification. Recent advances in mixup methods primarily focus on mixing based on saliency. However, many saliency detectors require intense computation and are especially burdensome for parameter-heavy transformer models. To this end, we propose TokenMixup, an efficient attention-guided token-level data augmentation method that aims to maximize the saliency of a mixed set of tokens. TokenMixup provides 15x faster saliency-aware data augmentation compared to gradient-based methods. Moreover, we introduce a variant of TokenMixup which mixes tokens within a single instance, thereby enabling multi-scale feature augmentation. Experiments show that our methods significantly improve the baseline models' performance on CIFAR and ImageNet-1K, while being more efficient than previous methods. We also achieve state-of-the-art performance on CIFAR-100 among from-scratch transformer models. Code is available at https://github.com/mlvlab/TokenMixup.

* Accepted paper at NeurIPS 2022 
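The sketch below conveys the general flavor of attention-guided token-level mixing: per-token attention scores serve as saliency, the least salient tokens of one sample are replaced with the most salient tokens of another, and labels are mixed in proportion to the swapped tokens. The function and its arguments are hypothetical simplifications; the paper's method includes additional components (for example, the in-instance variant) not shown here.

# Illustrative sketch of attention-guided token-level mixing:
# swap the least salient tokens of sample A with the most salient tokens
# of sample B, where saliency is read off an attention map.
import torch

def token_mixup(tokens_a, tokens_b, attn_a, attn_b, labels_a, labels_b, rho=0.25):
    """tokens_*: (L, D) token embeddings; attn_*: (L,) per-token attention saliency;
    labels_*: (num_classes,) one-hot or soft labels; rho: fraction of tokens to swap."""
    L = tokens_a.size(0)
    k = int(L * rho)
    drop = torch.topk(attn_a, k, largest=False).indices   # least salient tokens in A
    take = torch.topk(attn_b, k, largest=True).indices    # most salient tokens in B
    mixed = tokens_a.clone()
    mixed[drop] = tokens_b[take]
    lam = 1.0 - k / L                                      # label mixing ratio
    mixed_label = lam * labels_a + (1.0 - lam) * labels_b
    return mixed, mixed_label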

Video-Text Representation Learning via Differentiable Weak Temporal Alignment

Mar 31, 2022
Dohwan Ko, Joonmyung Choi, Juyeon Ko, Shinyeong Noh, Kyoung-Woon On, Eun-Sol Kim, Hyunwoo J. Kim


Learning generic joint representations for video and text with a supervised method requires a prohibitively large amount of manually annotated video data. As a practical alternative, a large-scale but uncurated and narrated video dataset, HowTo100M, has recently been introduced. However, it is still challenging to learn joint embeddings of video and text in a self-supervised manner due to the ambiguity and non-sequential alignment of such data. In this paper, we propose a novel multi-modal self-supervised framework, Video-Text Temporally Weak Alignment-based Contrastive Learning (VT-TWINS), to capture significant information from noisy and weakly correlated data using a variant of Dynamic Time Warping (DTW). We observe that standard DTW inherently cannot handle weakly correlated data and only considers the globally optimal alignment path. To address these problems, we develop a differentiable DTW that also reflects local information through weak temporal alignment. Moreover, our proposed model applies a contrastive learning scheme to learn feature representations from weakly correlated data. Our extensive experiments demonstrate that VT-TWINS attains significant improvements in multi-modal representation learning and improves performance on various challenging downstream tasks. Code is available at https://github.com/mlvlab/VT-TWINS.
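As a starting point for the differentiable alignment idea, the sketch below implements a soft-DTW style recursion over a video-text cost matrix, replacing the hard minimum with a smooth log-sum-exp so the alignment cost is differentiable. This is only the generic relaxation; the paper's weak temporal alignment and contrastive objective build on top of such a recursion and are not reproduced here.

# Starting-point sketch: a soft-DTW style recursion over a video-text cost
# matrix, using a smooth (log-sum-exp) minimum so the alignment cost is
# differentiable. VT-TWINS modifies this basic scheme for weakly correlated
# clips and captions; that part is not shown.
import torch

def soft_min(a, b, c, gamma):
    # Smooth approximation of min(a, b, c); approaches the hard min as gamma -> 0.
    vals = torch.stack([a, b, c])
    return -gamma * torch.logsumexp(-vals / gamma, dim=0)

def soft_dtw(cost, gamma=0.1):
    """cost: (T, S) pairwise distances between T video clips and S sentences.
    Returns a differentiable accumulated alignment cost."""
    T, S = cost.shape
    inf = torch.full((), float("inf"), device=cost.device)
    # prev holds row i of the accumulated-cost table; built without in-place ops
    prev = [torch.zeros((), device=cost.device)] + [inf] * S
    for i in range(T):
        curr = [inf]
        for j in range(S):
            best = soft_min(prev[j + 1], curr[j], prev[j], gamma)
            curr.append(cost[i, j] + best)
        prev = curr
    return prev[S]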
