
Hyunwoo Kim


VVS: Video-to-Video Retrieval with Irrelevant Frame Suppression

Mar 15, 2023
Won Jo, Geuntaek Lim, Gwangjin Lee, Hyunwoo Kim, Byungsoo Ko, Yukyung Choi


In content-based video retrieval (CBVR) over large-scale collections, efficiency is as important as accuracy. For this reason, several video-level feature-based studies have been actively conducted; nevertheless, owing to the severe difficulty of embedding a lengthy, untrimmed video into a single feature, these studies have proven insufficient for accurate retrieval compared to frame-level feature-based studies. In this paper, we present the insight that appropriate suppression of irrelevant frames can be a clue to overcoming the current obstacles of video-level feature-based approaches. Furthermore, we propose a Video-to-Video Suppression network (VVS) as a solution. VVS is an end-to-end framework consisting of an easy distractor elimination stage, which identifies which frames to remove, and a suppression weight generation stage, which determines how much to suppress the remaining frames. This structure is intended to effectively describe an untrimmed video with varying content and meaningless information. Its efficacy is demonstrated through extensive experiments: our approach is not only state-of-the-art among video-level feature-based approaches but also offers fast inference while achieving retrieval performance close to that of frame-level feature-based approaches.
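The two-stage idea in the abstract, dropping easy distractor frames and then down-weighting the rest before pooling into a single video-level feature, can be illustrated with a minimal sketch. The module below is a hypothetical stand-in rather than the paper's VVS architecture: it scores each frame with a small MLP, hard-drops frames below a threshold, and aggregates the survivors by their suppression weights.

```python
# Hypothetical sketch of suppression-weighted pooling of frame features into a
# single video-level descriptor (not the paper's exact VVS architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuppressionPooling(nn.Module):
    def __init__(self, dim: int, drop_threshold: float = 0.1):
        super().__init__()
        # Small per-frame scorer standing in for the two stages described above.
        self.scorer = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 1))
        self.drop_threshold = drop_threshold

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, D) frame-level features of one untrimmed video.
        scores = torch.sigmoid(self.scorer(frames)).squeeze(-1)   # (T,)
        keep = scores > self.drop_threshold                        # "easy distractor" elimination
        weights = scores * keep                                    # suppression weights for kept frames
        weights = weights / weights.sum().clamp_min(1e-6)
        video_feat = (weights.unsqueeze(-1) * frames).sum(dim=0)   # weighted average -> (D,)
        return F.normalize(video_feat, dim=0)

# Usage: embed a 120-frame video into one 512-d descriptor for video-to-video retrieval.
pool = SuppressionPooling(dim=512)
video_descriptor = pool(torch.randn(120, 512))
```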


SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization

Dec 20, 2022
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi


We present SODA: the first publicly available, million-scale, high-quality social dialogue dataset. Using SODA, we train COSMO, a generalizable conversation agent that outperforms previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets, e.g., DailyDialog (Li et al., 2017) and BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than the best-performing dialogue models, e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), and DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred over the original human-written gold responses. We make our data, models, and code public.

* Dataset, models, and code can be found at https://hyunw.kim/sodaverse 
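The distillation recipe described above, contextualizing a commonsense triple and then prompting a language model for a full dialogue, can be sketched as below. The triple format, prompt wording, and the `complete` function are hypothetical placeholders for whatever knowledge graph and model are used; the sketch only shows the contextualize-then-generate flow, not SODA's exact pipeline.

```python
# Hypothetical sketch of dialogue distillation via commonsense contextualization.
# The triple, prompts, and `complete` call are illustrative placeholders.
def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model (e.g., InstructGPT)."""
    raise NotImplementedError

def distill_dialogue(head: str, relation: str, tail: str) -> str:
    # 1) Contextualize the commonsense triple into a short narrative.
    narrative = complete(
        "Rewrite this commonsense fact as a two-sentence story about named people:\n"
        f"{head} -- {relation} -- {tail}\n"
    )
    # 2) Generate a multi-turn dialogue grounded in that narrative.
    return complete(
        f"{narrative}\n\nWrite a natural multi-turn conversation between the two people above."
    )

# Example triple in an ATOMIC-style format (hypothetical):
# distill_dialogue("PersonX moves to a new city", "xEffect", "feels lonely at first")
```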

Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment

Nov 04, 2022
Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung


This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem: find pseudo-labels that maximize the mutual information between the label and the data while remaining close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables simple yet effective clustering-based representation learning without extra training techniques or artificial constraints such as sampling strategies or equipartition constraints. With relatively few training epochs, representations learned by MIRA achieve state-of-the-art performance on various downstream tasks, including linear/k-NN evaluation and transfer learning. In particular, with only 400 epochs, our method applied to the ImageNet dataset with a ResNet-50 architecture achieves 75.6% linear evaluation accuracy.

* NeurIPS 2022 
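One plausible way to write down the pseudo-labeling problem the abstract describes, maximizing mutual information between labels and data while staying close to the model's predictions, is the following, for per-sample pseudo-label distributions q_i and model probabilities p_i. This is only a reading of the abstract, not necessarily the paper's exact objective; the paper's contribution is a provably convergent fixed-point iteration for its problem.

```latex
\max_{\{q_i \in \Delta^{K-1}\}_{i=1}^{N}}
\;\underbrace{H(\bar{q}) - \tfrac{1}{N}\sum_{i=1}^{N} H(q_i)}_{\text{MI between label and data index}}
\;-\; \frac{1}{\beta N} \sum_{i=1}^{N} \mathrm{KL}\!\left(q_i \,\|\, p_i\right),
\qquad \bar{q} := \tfrac{1}{N}\sum_{i=1}^{N} q_i .
```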

Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization

Oct 07, 2022
Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, Scott Sanner


Offline reinforcement learning (RL) addresses the problem of learning a performant policy from a fixed batch of data collected by following some behavior policy. Model-based approaches are particularly appealing in the offline setting, since they can extract more learning signal from the logged dataset by learning a model of the environment. However, the performance of existing model-based approaches falls short of their model-free counterparts due to the compounding of estimation errors in the learned model. Driven by this observation, we argue that it is critical for a model-based method to understand when to trust the model, when to rely on model-free estimates, and how to act conservatively with respect to both. To this end, we derive an elegant and simple methodology called conservative Bayesian model-based value expansion for offline policy optimization (CBOP), which trades off model-free and model-based estimates during the policy evaluation step according to their epistemic uncertainties, and facilitates conservatism by taking a lower bound on the Bayesian posterior value estimate. On the standard D4RL continuous control tasks, we find that our method significantly outperforms previous model-based approaches, e.g., MOPO by 116.4%, MOReL by 23.2%, and COMBO by 23.7%. Further, CBOP achieves state-of-the-art performance on 11 out of 18 benchmark datasets while performing on par on the remaining datasets.
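The two ingredients named in the abstract, an uncertainty-based trade-off between value estimates and a conservative lower bound, can be illustrated with a minimal sketch. The code below assumes we already have a set of candidate value estimates with epistemic variances (e.g., from an ensemble); it performs an inverse-variance weighting followed by a lower confidence bound, which is an illustration of the idea and not CBOP's exact Bayesian derivation.

```python
# Illustrative sketch: combine candidate value estimates by their epistemic
# uncertainty, then act conservatively via a lower confidence bound.
import numpy as np

def conservative_value(estimates: np.ndarray, variances: np.ndarray, kappa: float = 1.0) -> float:
    """estimates/variances: per-candidate value estimates (e.g., h-step model-based
    rollout returns plus a model-free bootstrap) and their epistemic variances."""
    precisions = 1.0 / np.maximum(variances, 1e-8)
    weights = precisions / precisions.sum()           # trust low-uncertainty estimates more
    posterior_mean = float(np.dot(weights, estimates))
    posterior_var = float(1.0 / precisions.sum())     # variance of the precision-weighted mean
    return posterior_mean - kappa * np.sqrt(posterior_var)  # lower bound -> conservatism

# Example: three candidate estimates disagree; the noisiest one contributes least.
print(conservative_value(np.array([10.0, 12.0, 30.0]), np.array([0.5, 1.0, 25.0])))
```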


Is Continual Learning Truly Learning Representations Continually?

Jun 16, 2022
Sungmin Cha, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon


Continual learning (CL) aims to learn from sequentially arriving tasks without forgetting previous tasks. Whereas CL algorithms have tried to achieve higher average test accuracy across all the tasks learned so far, continually learning useful representations is critical for successful generalization and downstream transfer. To measure representational quality, we re-train only the output layers using a small balanced dataset covering all the tasks, evaluating the average accuracy without any prediction bias toward the current task. We also test on several downstream tasks, measuring the transfer learning accuracy of the learned representations. By testing our new formalism on ImageNet-100 and ImageNet-1000, we find that using more exemplar memory is the only option that makes a meaningful difference in the learned representations, and that most of the regularization- or distillation-based CL algorithms that use exemplar memory fail to learn continually useful representations in class-incremental learning. Surprisingly, unsupervised (or self-supervised) CL with a sufficient memory size can achieve performance comparable to its supervised counterparts. Considering non-trivial labeling costs, we claim that finding more efficient unsupervised CL algorithms that minimally use exemplar memory would be the next promising direction for CL research.

* Preprint 
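The evaluation protocol described above (freeze the continually learned backbone and re-train only the output layer on a small class-balanced dataset covering all tasks) is essentially a linear probe. A minimal sketch follows, with the backbone, data loaders, and hyperparameters as assumed placeholders rather than the paper's exact settings.

```python
# Minimal linear-probe sketch of the evaluation protocol: freeze the continually
# learned backbone, train only a new output layer on a small balanced dataset,
# then report accuracy. Backbone and loaders are assumed inputs.
import torch
import torch.nn as nn

def probe_accuracy(backbone, balanced_loader, eval_loader, feat_dim, num_classes, epochs=10):
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad_(False)                      # only the head is trained
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in balanced_loader:
            with torch.no_grad():
                z = backbone(x)                      # frozen representation
            opt.zero_grad()
            loss_fn(head(z), y).backward()
            opt.step()
    correct = total = 0
    with torch.no_grad():
        for x, y in eval_loader:
            correct += (head(backbone(x)).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total
```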

ProsocialDialog: A Prosocial Backbone for Conversational Agents

May 25, 2022
Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap


Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce ProsocialDialog, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K RoTs, and 497K dialogue safety labels accompanied by free-form rationales. With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost. Empirical results show that Prost generates more socially acceptable dialogues compared to other state-of-the-art language and dialogue models in both in-domain and out-of-domain settings. Additionally, Canary effectively guides conversational agents and off-the-shelf language models to generate significantly more prosocial responses. Our work highlights the promise and importance of creating and steering conversational AI to be socially responsible.

* 25 pages, 10 figures 

Perception Prioritized Training of Diffusion Models

Apr 01, 2022
Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, Sungroh Yoon


Diffusion models learn to restore data corrupted with different levels of noise by optimizing a weighted sum of the corresponding loss terms, i.e., denoising score matching losses. In this paper, we show that restoring data corrupted at certain noise levels offers a proper pretext task for the model to learn rich visual concepts. We propose to prioritize such noise levels over others during training by redesigning the weighting scheme of the objective function. We show that this simple redesign of the weighting scheme significantly improves the performance of diffusion models regardless of dataset, architecture, and sampling strategy.

* CVPR 2022 Code: https://github.com/jychoi118/P2-weighting 
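The kind of objective redesign the abstract describes can be sketched as a per-timestep reweighting of the standard noise-prediction loss. The weight used below is a toy stand-in that simply down-weights low-noise (high-SNR) steps; the actual P2 weighting is defined in the paper and in the linked repository.

```python
# Illustrative reweighted denoising loss: the standard epsilon-prediction MSE per
# sampled timestep, multiplied by a weight over noise levels. The weight is a toy
# stand-in, not the paper's exact P2 scheme; `model(x_t, t)` is an assumed signature.
import torch

def weighted_diffusion_loss(model, x0, alphas_cumprod):
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t]                                       # (b,) cumulative alphas
    shape = (b,) + (1,) * (x0.dim() - 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.view(shape).sqrt() * x0 + (1 - a_bar).view(shape).sqrt() * noise  # forward diffusion
    eps_pred = model(x_t, t)                                        # predict the added noise
    per_sample = ((eps_pred - noise) ** 2).flatten(1).mean(1)       # standard eps-prediction MSE
    snr = a_bar / (1 - a_bar)
    weight = 1.0 / (1.0 + snr)                                      # toy weight: de-emphasize low-noise steps
    return (weight * per_sample).mean()
```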

Bridging the Gap between Classification and Localization for Weakly Supervised Object Localization

Apr 01, 2022
Eunji Kim, Siwon Kim, Jungbeom Lee, Hyunwoo Kim, Sungroh Yoon


Weakly supervised object localization aims to find a target object region in a given image with only weak supervision, such as image-level labels. Most existing methods use a class activation map (CAM) to generate a localization map; however, a CAM identifies only the most discriminative parts of a target object rather than the entire object region. In this work, we attribute the gap between classification and localization to the misalignment between the direction of an input feature and that of a class-specific weight. We demonstrate that this misalignment suppresses the activation of the CAM in areas that are less discriminative but still belong to the target object. To bridge the gap, we propose a method that aligns feature directions with the class-specific weight. The proposed method achieves state-of-the-art localization performance on the CUB-200-2011 and ImageNet-1K benchmarks.

* CVPR 2022 
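The quantity at the center of the argument, the alignment between a per-location feature vector and the class-specific classifier weight, can be made concrete with a short sketch. The code computes a standard CAM alongside a per-location cosine-similarity (alignment) map; the paper's method for correcting the misalignment is its own contribution and is not reproduced here.

```python
# Sketch: a standard class activation map (CAM) next to the per-location cosine
# similarity between features and the class weight, i.e., the (mis)alignment the
# abstract refers to. Feature map and classifier weight are assumed inputs.
import torch
import torch.nn.functional as F

def cam_and_alignment(features: torch.Tensor, class_weight: torch.Tensor):
    # features: (C, H, W) last-conv feature map; class_weight: (C,) weight of the target class.
    cam = torch.einsum("chw,c->hw", features, class_weight)                   # channel-weighted sum
    cos = F.cosine_similarity(features, class_weight.view(-1, 1, 1), dim=0)  # direction alignment per location
    return cam, cos

cam, cos = cam_and_alignment(torch.randn(2048, 7, 7), torch.randn(2048))
```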

VISOLO: Grid-Based Space-Time Aggregation for Efficient Online Video Instance Segmentation

Dec 08, 2021
Su Ho Han, Sukjun Hwang, Seoung Wug Oh, Yeonchool Park, Hyunwoo Kim, Min-Jung Kim, Seon Joo Kim


For online video instance segmentation (VIS), fully and efficiently utilizing the information from previous frames is essential for real-time applications. Most previous methods follow a two-stage approach requiring additional computations such as an RPN and RoIAlign, and do not fully exploit the available information in the video for all subtasks of VIS. In this paper, we propose a novel single-stage framework for online VIS built on a grid-structured feature representation. The grid-based features allow us to employ fully convolutional networks for real-time processing and to easily reuse and share features across different components. We also introduce cooperatively operating modules that aggregate information from available frames in order to enrich the features for all subtasks of VIS. Our design takes full advantage of previous information in a grid form for all tasks of VIS in an efficient way, and we achieve new state-of-the-art accuracy (38.6 AP and 36.9 AP) and speed (40.0 FPS) among online VIS methods on the YouTube-VIS 2019 and 2021 datasets.


Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes

Sep 21, 2021
Hyunwoo Kim, Byeongchang Kim, Gunhee Kim


Empathy is a complex cognitive ability based on reasoning about others' affective states. In order to better understand others and express stronger empathy in dialogues, we argue that two issues must be tackled at the same time: (i) identifying which words in the other's utterance are the cause of their emotion, and (ii) reflecting those specific words in the response generation. However, previous approaches for recognizing emotion-cause words in text require sub-utterance-level annotations, which can be demanding. Taking inspiration from social cognition, we leverage a generative estimator to infer emotion-cause words from utterances with no word-level labels. We also introduce a novel method based on pragmatics to make dialogue models focus on targeted words in the input during generation. Our method can be applied to any dialogue model on the fly, with no additional training. We show that our approach improves multiple best-performing dialogue agents at generating more focused empathetic responses, in terms of both automatic and human evaluation.

* Accepted at EMNLP 2021 main conference. For the code and dataset, see https://github.com/skywalker023/focused-empathy 
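One simple way to make the first step concrete, finding candidate emotion-cause words without word-level labels, is occlusion-style scoring: remove one word at a time and measure how much the probability of the observed emotion drops under an emotion model. This is an illustrative stand-in assuming a hypothetical `emotion_prob` scorer; the paper's generative estimator and pragmatics-based decoding are described in the linked repository.

```python
# Illustrative occlusion-based scoring of candidate emotion-cause words; not the
# paper's generative estimator. `emotion_prob(text, emotion)` is a hypothetical
# scorer returning P(emotion | text) from any emotion classifier or language model.
def cause_word_scores(utterance: str, emotion: str, emotion_prob) -> dict:
    words = utterance.split()
    base = emotion_prob(utterance, emotion)
    scores = {}
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])       # drop one word
        scores[w] = base - emotion_prob(ablated, emotion)   # larger drop -> more likely a cause word
    return scores

# The highest-scoring words would then be the ones the response generator is steered
# to focus on, e.g., via the pragmatics-based decoding described above.
```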