Sanghyuk Chun

Longer-range Contextualized Masked Autoencoder

Oct 20, 2023
Taekyung Kim, Sanghyuk Chun, Byeongho Heo, Dongyoon Han

Masked image modeling (MIM) has emerged as a promising self-supervised learning (SSL) strategy. MIM pre-training facilitates learning powerful representations using an encoder-decoder framework by randomly masking some input pixels and reconstructing the masked pixels from the remaining ones. However, as the encoder is trained on partial pixels, MIM pre-training can suffer from a limited ability to capture long-range dependencies. This limitation may hinder it from fully understanding multiple-range dependencies, resulting in narrow highlighted regions in the attention map that may incur accuracy drops. To mitigate this limitation, we propose a self-supervised learning framework named Longer-range Contextualized Masked Autoencoder (LC-MAE). LC-MAE effectively leverages a global context understanding of visual representations while simultaneously reducing the spatial redundancy of the input. Our method steers the encoder to learn from entire pixels in multiple views while also learning local representations from sparse pixels. As a result, LC-MAE learns more discriminative representations, achieving 84.2% top-1 accuracy with ViT-B on ImageNet-1K, a 0.6%p gain. We attribute this success to the enhanced pre-training, as evidenced by the singular value spectrum and attention analyses. Finally, LC-MAE achieves significant performance gains on downstream semantic segmentation and fine-grained visual classification tasks, as well as on diverse robustness evaluation metrics. Our code will be publicly available.
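
To make the masking step concrete, here is a minimal sketch of the MAE-style random patch masking that LC-MAE builds on; it is a generic reconstruction-based masking routine, not the full LC-MAE pipeline (the multi-view, longer-range contextualization is omitted), and the masking ratio and names are illustrative.

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly drop a fraction of patch tokens, MAE-style.

    patches: (batch, num_patches, dim) patch embeddings.
    Returns the kept tokens, the binary mask (1 = masked), and the
    indices needed to restore the original patch order.
    """
    b, n, d = patches.shape
    n_keep = int(n * (1.0 - mask_ratio))

    noise = torch.rand(b, n, device=patches.device)   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)         # random permutation of patches
    ids_restore = torch.argsort(ids_shuffle, dim=1)   # inverse permutation

    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))

    mask = torch.ones(b, n, device=patches.device)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)         # 1 where a patch was masked
    return kept, mask, ids_restore
```

In MAE-style training the reconstruction loss is then computed only on masked positions, e.g. `loss = ((pred - target) ** 2).mean(-1); loss = (loss * mask).sum() / mask.sum()`.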

Improved Probabilistic Image-Text Representations

May 29, 2023
Sanghyuk Chun

The Image-Text Matching (ITM) task, a fundamental vision-language (VL) task, suffers from inherent ambiguity arising from multiplicity and imperfect annotations. Deterministic functions are not sufficiently powerful to capture this ambiguity, prompting the exploration of probabilistic embeddings to tackle the challenge. However, the existing probabilistic ITM approach encounters two key shortcomings: the burden of heavy computations due to the Monte Carlo approximation, and the loss saturation issue in the face of abundant false negatives. To overcome these issues, this paper presents improved Probabilistic Cross-Modal Embeddings (named PCME++) by introducing a new probabilistic distance with a closed-form solution. In addition, two optimization techniques are proposed to enhance PCME++ further: first, the incorporation of pseudo-positives to prevent the loss saturation problem under massive false negatives; second, mixed sample data augmentation for probabilistic matching. Experimental results on MS-COCO Caption and two extended benchmarks, CxC and ECCV Caption, demonstrate the effectiveness of PCME++ compared to state-of-the-art ITM methods. The robustness of PCME++ is also evaluated under noisy image-text correspondences. In addition, the potential applicability of PCME++ in automatic prompt tuning for zero-shot classification is shown. The code is available at https://naver-ai.github.io/pcmepp/.
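
As a concrete illustration of a closed-form probabilistic distance, the sketch below computes the expected squared Euclidean distance between two diagonal Gaussian embeddings, which has an analytic expression and therefore needs no Monte Carlo sampling. Whether this is exactly the distance used in PCME++ should be checked against the paper and code; the scale/shift parameters `a` and `b` are placeholders.

```python
import torch

def expected_sq_distance(mu1, logvar1, mu2, logvar2):
    """E ||Z1 - Z2||^2 for Z1 ~ N(mu1, diag(var1)), Z2 ~ N(mu2, diag(var2)).

    The expectation has a closed form: squared distance between the means
    plus the sum of both variances, so no sampling is required.
    """
    mean_term = ((mu1 - mu2) ** 2).sum(dim=-1)
    var_term = (logvar1.exp() + logvar2.exp()).sum(dim=-1)
    return mean_term + var_term

def match_probability(mu_img, logvar_img, mu_txt, logvar_txt, a=1.0, b=0.0):
    # Map the distance to a (0, 1) match probability with a scale/shift;
    # a and b stand in for learnable calibration parameters.
    d = expected_sq_distance(mu_img, logvar_img, mu_txt, logvar_txt)
    return torch.sigmoid(-a * d + b)
```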

* Code: https://github.com/naver-ai/pcmepp. Project page: https://naver-ai.github.io/pcmepp/. 21 pages, 1.2 MB 

RoCOCO: Robust Benchmark MS-COCO to Stress-test Robustness of Image-Text Matching Models

Apr 21, 2023
Seulki Park, Daeho Um, Hajung Yoon, Sanghyuk Chun, Sangdoo Yun, Jin Young Choi

Recently, large-scale vision-language pre-training models and visual semantic embedding methods have significantly improved image-text matching (ITM) accuracy on the MS-COCO 5K test set. However, it is unclear how robust these state-of-the-art (SOTA) models are when used in the wild. In this paper, we propose a novel evaluation benchmark to stress-test the robustness of ITM models. To this end, we add various fooling images and captions to a retrieval pool. Specifically, we change images by inserting unrelated images, and change captions by substituting a noun, which can change the meaning of a sentence. We discover that merely adding these newly created images and captions to the test set degrades the performance (i.e., Recall@1) of a wide range of SOTA models (e.g., 81.9% $\rightarrow$ 64.5% for BLIP, 66.1% $\rightarrow$ 37.5% for VSE$\infty$). We expect that our findings can provide insights for improving the robustness of vision-language models and for devising more diverse stress-test methods for cross-modal retrieval tasks. Source code and dataset will be available at https://github.com/pseulki/rococo.
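
The caption perturbation can be sketched in a few lines: pick a noun in the caption and substitute it with an unrelated one so the sentence's meaning changes. The snippet below is an illustrative version with a hand-given noun set and replacement pool, not the exact RoCOCO construction, which uses its own noun detection and filtering.

```python
import random

def make_fooling_caption(caption: str, nouns: set, replacement_pool: list, rng=random) -> str:
    """Replace one noun in the caption with an unrelated noun.

    `nouns` is a set of nouns to detect (e.g., obtained from a POS tagger);
    `replacement_pool` holds candidate unrelated nouns. Illustrative only.
    """
    words = caption.split()
    noun_positions = [i for i, w in enumerate(words) if w.lower().strip(".,") in nouns]
    if not noun_positions:
        return caption
    i = rng.choice(noun_positions)
    original = words[i].lower().strip(".,")
    candidates = [n for n in replacement_pool if n != original]
    words[i] = rng.choice(candidates)
    return " ".join(words)

# Example: turn "A dog runs on the beach" into a meaning-changing variant.
nouns = {"dog", "beach", "cat", "man"}
pool = ["car", "pizza", "piano"]
print(make_fooling_caption("A dog runs on the beach", nouns, pool))
```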

Three Recipes for Better 3D Pseudo-GTs of 3D Human Mesh Estimation in the Wild

Apr 10, 2023
Gyeongsik Moon, Hongsuk Choi, Sanghyuk Chun, Jiyoung Lee, Sangdoo Yun

Recovering 3D human mesh in the wild is highly challenging, as in-the-wild (ITW) datasets provide only 2D pose ground truths (GTs). Recently, 3D pseudo-GTs have been widely used to train 3D human mesh estimation networks, as they enable 3D mesh supervision when training on ITW datasets. However, despite the great potential of 3D pseudo-GTs, there has been no extensive analysis investigating which factors are important for producing more beneficial 3D pseudo-GTs. In this paper, we provide three recipes to obtain highly beneficial 3D pseudo-GTs of ITW datasets. The main challenge is that only 2D-based weak supervision is allowed when obtaining the 3D pseudo-GTs. Each of our three recipes addresses this challenge from a different aspect: depth ambiguity, sub-optimality of weak supervision, and implausible articulation. Experimental results show that simply re-training state-of-the-art networks with our new 3D pseudo-GTs elevates their performance to the next level without bells and whistles. The 3D pseudo-GTs are publicly available at https://github.com/mks0601/NeuralAnnot_RELEASE.

* Published at CVPRW 2023 

CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion

Mar 21, 2023
Geonmo Gu, Sanghyuk Chun, Wonjae Kim, HeeJae Jun, Yoohoon Kang, Sangdoo Yun

This paper proposes a novel diffusion-based model, CompoDiff, for solving Composed Image Retrieval (CIR) with latent diffusion, and presents a newly created dataset of 18 million triplets of reference images, conditions, and corresponding target images to train the model. CompoDiff not only achieves a new zero-shot state of the art on CIR benchmarks such as FashionIQ but also enables more versatile CIR by accepting various conditions, such as negative text and image mask conditions, which are unavailable with existing CIR methods. In addition, the CompoDiff features lie in the intact CLIP embedding space, so they can be directly used by all existing models that exploit the CLIP space. The code, the dataset used for training, and the pre-trained weights are available at https://github.com/navervision/CompoDiff

* First two authors contributed equally; 23 pages, 4.8MB 

SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage

Mar 20, 2023
Song Park, Sanghyuk Chun, Byeongho Heo, Wonjae Kim, Sangdoo Yun

We need billion-scale images to achieve more generalizable and ground-breaking vision models, as well as massive dataset storage to ship the images (e.g., the LAION-4B dataset needs 240TB of storage space). However, it has become challenging to deal with unlimited dataset storage under limited storage infrastructure. A number of storage-efficient training methods have been proposed to tackle this problem, but they are rarely scalable or suffer severe performance degradation. In this paper, we propose a storage-efficient training strategy for vision classifiers on large-scale datasets (e.g., ImageNet) that uses only 1024 tokens per instance without using raw-level pixels; our token storage requires less than 1% of the storage needed for the original JPEG-compressed raw pixels. We also propose token augmentations and a Stem-adaptor module that allow our approach to use the same architecture as pixel-based approaches, with only minimal modifications to the stem layer and carefully tuned optimization settings. Our experimental results on ImageNet-1k show that our method outperforms other storage-efficient training methods by a large margin. We further show the effectiveness of our method in other practical scenarios: storage-efficient pre-training and continual learning. Code is available at https://github.com/naver-ai/seit
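
To illustrate the idea of training a ViT directly on stored tokens, here is a hypothetical stem that replaces the pixel patch-embedding layer with a token-embedding lookup plus a light adaptor; the vocabulary size, sequence length, and module structure are assumptions for the sketch, not the paper's exact Stem-adaptor.

```python
import torch
import torch.nn as nn

class TokenStem(nn.Module):
    """Hypothetical stem mapping 1024 discrete tokens per image to the
    embedding sequence a ViT body expects, replacing the pixel patch-embedding
    stem. A sketch of the idea, not the paper's exact Stem-adaptor module.
    """
    def __init__(self, vocab_size: int = 8192, embed_dim: int = 768, seq_len: int = 1024):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, embed_dim))
        self.adaptor = nn.Sequential(          # light mixing before the ViT blocks
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, tokens: torch.LongTensor) -> torch.Tensor:
        # tokens: (batch, 1024) integer ids produced offline by a tokenizer
        x = self.token_embed(tokens) + self.pos_embed
        return self.adaptor(x)                 # (batch, 1024, embed_dim), fed to the ViT blocks
```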

* First two authors contributed equally; 15 pages, 1.1MB 

Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization

Mar 01, 2023
Sangwon Jung, Taeeon Park, Sanghyuk Chun, Taesup Moon

Many existing group fairness-aware training methods aim to achieve group fairness either by re-weighting underrepresented groups based on certain rules or by using weakly approximated surrogates for the fairness metrics as regularization terms in the objective. Although each learning scheme has its own strength in terms of applicability or performance, it is difficult for any method in either category to be considered a gold standard, since their successful performance is typically limited to specific cases. To that end, we propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective using a classwise distributionally robust optimization (DRO) framework. We then develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group. Our experiments show that FairDRO is scalable and easily adaptable to diverse applications, and consistently achieves state-of-the-art performance on several benchmark datasets in terms of the accuracy-fairness trade-off, compared to recent strong baselines.
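
A generic sketch of the classwise group-DRO re-weighting idea is shown below: per class, group weights are updated multiplicatively in proportion to group losses, and the training objective is the weighted sum of group losses. The exact FairDRO update rule and objective may differ; the step size `eta` and the exponentiated-gradient form are assumptions.

```python
import torch

def classwise_dro_weights(group_losses: torch.Tensor, weights: torch.Tensor, eta: float = 0.1):
    """One multiplicative-weights step of a generic classwise group-DRO scheme.

    group_losses: (num_classes, num_groups) average loss of each group within each class.
    weights:      (num_classes, num_groups) current group weights, rows summing to 1.
    Groups with higher loss are up-weighted; illustrative, not the exact FairDRO rule.
    """
    new_w = weights * torch.exp(eta * group_losses.detach())
    return new_w / new_w.sum(dim=1, keepdim=True)

def reweighted_objective(group_losses: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # Weighted sum of group losses per class, averaged over classes;
    # this is what the model parameters are trained to minimize.
    return (weights * group_losses).sum(dim=1).mean()
```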

Group Generalized Mean Pooling for Vision Transformer

Dec 08, 2022
Byungsoo Ko, Han-Gyu Kim, Byeongho Heo, Sangdoo Yun, Sanghyuk Chun, Geonmo Gu, Wonjae Kim

Vision Transformer (ViT) extracts the final representation from either the class token or an average of all patch tokens, following the architecture of the Transformer in Natural Language Processing (NLP) or of Convolutional Neural Networks (CNNs) in computer vision. However, studies on the best way of aggregating patch tokens remain limited to average pooling, even though widely used pooling strategies, such as max and GeM pooling, could be considered. Despite their effectiveness, the existing pooling strategies do not consider the architecture of ViT or the channel-wise differences in the activation maps, aggregating the crucial and trivial channels with the same importance. In this paper, we present Group Generalized Mean (GGeM) pooling as a simple yet powerful pooling strategy for ViT. GGeM divides the channels into groups and computes GeM pooling with a shared pooling parameter per group. As ViT groups the channels via the multi-head attention mechanism, grouping the channels with GGeM leads to lower head-wise dependence while amplifying important channels in the activation maps. Exploiting GGeM yields 0.1%p to 0.7%p performance boosts over the baselines and achieves state-of-the-art performance for ViT-Base and ViT-Large models on the ImageNet-1K classification task. Moreover, GGeM outperforms the existing pooling strategies on image retrieval and multi-modal representation learning tasks, demonstrating the superiority of GGeM across a variety of tasks. GGeM is a simple algorithm, requiring only a few lines of code to implement.
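
Following the description above (split channels into groups, share one GeM exponent per group), a minimal PyTorch sketch of GGeM over patch tokens could look as follows; the exponent initialization and clamping epsilon are illustrative, and details may differ from the official implementation.

```python
import torch
import torch.nn as nn

class GGeM(nn.Module):
    """Group Generalized Mean (GGeM) pooling over ViT patch tokens.

    Channels are split into `num_groups` groups and each group shares one
    learnable GeM exponent p. The initialization (p = 3) and clamping epsilon
    are illustrative choices, not necessarily the paper's.
    """
    def __init__(self, dim: int = 768, num_groups: int = 12, eps: float = 1e-6):
        super().__init__()
        assert dim % num_groups == 0
        self.num_groups = num_groups
        self.eps = eps
        self.p = nn.Parameter(torch.full((num_groups,), 3.0))   # one exponent per group

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim) patch tokens (class token excluded)
        b, n, d = tokens.shape
        x = tokens.clamp(min=self.eps)                           # GeM assumes positive activations
        x = x.view(b, n, self.num_groups, d // self.num_groups)
        p = self.p.view(1, 1, self.num_groups, 1)
        pooled = x.pow(p).mean(dim=1)                            # (batch, groups, dim/groups)
        pooled = pooled.pow(1.0 / self.p.view(1, self.num_groups, 1))
        return pooled.reshape(b, d)                              # (batch, dim) image-level feature


# Minimal usage: pool 196 patch tokens of a ViT-B/16 into one 768-d vector.
tokens = torch.rand(2, 196, 768)
print(GGeM()(tokens).shape)  # torch.Size([2, 768])
```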

Similarity of Neural Architectures Based on Input Gradient Transferability

Oct 20, 2022
Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee

In this paper, we aim to design a quantitative similarity function between two neural architectures. Specifically, we define model similarity using input gradient transferability: we generate adversarial samples for two networks and measure the average accuracy of each network on the adversarial samples of the other. If two networks are highly correlated, the attack transferability will be high, resulting in high similarity. Using the similarity score, we investigate two questions: (1) Which network components contribute to model diversity? (2) How does model diversity affect practical scenarios? We answer the first question with feature importance analysis and clustering analysis. The second question is validated in two different scenarios: model ensemble and knowledge distillation. Our findings show that model diversity plays a key role when interacting with different neural architectures. For example, we find that more diversity leads to better ensemble performance. We also observe that the relationship between teacher-student similarity and distillation performance depends on the choice of base architectures for the teacher and student networks. We expect our analysis tool to help provide a high-level understanding of the differences between neural architectures, as well as practical guidance when using multiple architectures.
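
The similarity score described above can be sketched directly: craft adversarial examples on each network, evaluate each network on the other's adversarial examples, and map low cross-accuracy (high transferability) to high similarity. The `attack` callable below is a placeholder for any attack such as PGD, and the exact normalization in the paper may differ from this sketch.

```python
import torch

@torch.no_grad()
def accuracy(model, images, labels):
    return (model(images).argmax(dim=1) == labels).float().mean().item()

def input_gradient_similarity(model_a, model_b, images, labels, attack):
    """Similarity of two networks via cross-network attack transferability.

    `attack(model, images, labels)` is any adversarial attack returning
    perturbed images; it is a placeholder here.
    """
    adv_a = attack(model_a, images, labels)   # adversarial samples crafted on A
    adv_b = attack(model_b, images, labels)   # adversarial samples crafted on B
    acc_b_on_a = accuracy(model_b, adv_a, labels)
    acc_a_on_b = accuracy(model_a, adv_b, labels)
    # Low cross-accuracy -> attacks transfer well -> the models are similar.
    return 1.0 - 0.5 * (acc_a_on_b + acc_b_on_a)
```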

* 21 pages, 10 figures, 1.5MB 

A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective

Aug 21, 2022
Chanwoo Park, Sangdoo Yun, Sanghyuk Chun

We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that, regardless of the choice of mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first-layer parameters. Moreover, our theoretical results support that the MSDA training strategy can improve adversarial robustness and generalization compared to the vanilla training strategy. Using these results, we provide a high-level understanding of how different design choices of MSDA work differently. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently: CutMix regularizes the input gradients according to pixel distances, while Mixup regularizes the input gradients regardless of pixel distances. Our theoretical results also show that the optimal MSDA strategy depends on the tasks, datasets, or model parameters. From these observations, we propose generalized MSDAs, a Hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. Our implementation can leverage the advantages of both Mixup and CutMix while remaining very efficient, with a computation cost nearly the same as Mixup and CutMix. Our empirical study shows that HMix and GMix outperform the previous state-of-the-art MSDA methods on CIFAR-100 and ImageNet classification tasks. Source code is available at https://github.com/naver-ai/hmix-gmix
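
For reference, the two baseline MSDA schemes analyzed above can be sketched as follows; this shows standard Mixup and CutMix (HMix and GMix themselves are not reproduced here), with the conventional Beta-distributed mixing coefficient and the area-adjusted label weight for CutMix.

```python
import numpy as np
import torch

def mixup(x, y, alpha=1.0):
    """Mixup: convex combination of two images; labels are mixed with the same weight."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam            # loss = lam*CE(p, y) + (1-lam)*CE(p, y[perm])

def cutmix(x, y, alpha=1.0):
    """CutMix: paste a random rectangular patch from a shuffled batch."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    rh, rw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - rh // 2, 0, h), np.clip(cy + rh // 2, 0, h)
    x1, x2 = np.clip(cx - rw // 2, 0, w), np.clip(cx + rw // 2, 0, w)
    x_mix = x.clone()
    x_mix[:, :, y1:y2, x1:x2] = x[perm][:, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)   # adjust lambda to the actual patch area
    return x_mix, y, y[perm], lam
```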

* First two authors contributed equally; 29 pages 