Yu-Jung Heo

Biointelligence Laboratory, Department of Computer Science and Engineering, Seoul National University, Seoul, South Korea

SGRAM: Improving Scene Graph Parsing via Abstract Meaning Representation

Oct 17, 2022
Woo Suk Choi, Yu-Jung Heo, Byoung-Tak Zhang


A scene graph is a structured semantic representation that can be modeled as a graph derived from images or texts. Image-based scene graph generation has been studied actively in recent years, whereas text-based scene graph generation has received far less attention. In this paper, we focus on the problem of parsing scene graphs from textual descriptions of visual scenes. The core idea is to use abstract meaning representation (AMR) instead of the dependency parsing used in most previous studies. AMR is a graph-based semantic formalism of natural language that abstracts the concepts of the words in a sentence, in contrast to dependency parsing, which considers the dependency relationships among all words in a sentence. To this end, we design a simple yet effective two-stage scene graph parsing framework, SGRAM (Scene GRaph parsing via Abstract Meaning representation): 1) transforming a textual description of an image into an AMR graph (Text-to-AMR) and 2) encoding the AMR graph with a Transformer-based language model to generate a scene graph (AMR-to-SG). Experimental results show that the scene graphs generated by our framework outperform those of the dependency parsing-based model by 11.61% and those of the previous state-of-the-art model using a pre-trained Transformer language model by 3.78%. Furthermore, we apply SGRAM to image retrieval, one of the downstream tasks for scene graphs, and confirm the effectiveness of the scene graphs generated by our framework.

* 7 pages, 3 figures, 3 tables 
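To make the two-stage data flow concrete, here is a minimal sketch assuming the penman library for decoding an AMR graph. The AMR string is hand-written for illustration, and the rule-based triple extraction merely stands in for the paper's Transformer-based AMR-to-SG stage; it is not the authors' implementation.

    import penman  # pip install penman

    # Stage 1 (Text-to-AMR) would be produced by an AMR parser; hard-coded here.
    amr = """
    (r / ride-01
       :ARG0 (m / man)
       :ARG1 (h / horse
                :mod (b / brown)))
    """

    g = penman.decode(amr)
    concepts = {src: tgt for src, _, tgt in g.instances()}
    args = {}          # predicate variable -> {role: argument variable}
    attributes = []    # (object, attribute) pairs from :mod edges

    for src, role, tgt in g.edges():
        if role == ":mod":
            attributes.append((concepts[src], concepts[tgt]))
        elif role.startswith(":ARG"):
            args.setdefault(src, {})[role] = tgt

    # Stage 2 (AMR-to-SG), approximated by simple rules: a predicate with
    # :ARG0 and :ARG1 becomes a <subject, predicate, object> relation triple.
    relations = []
    for pred, a in args.items():
        if ":ARG0" in a and ":ARG1" in a:
            predicate = concepts[pred].rsplit("-", 1)[0]   # "ride-01" -> "ride"
            relations.append((concepts[a[":ARG0"]], predicate, concepts[a[":ARG1"]]))

    print(relations)    # [('man', 'ride', 'horse')]
    print(attributes)   # [('horse', 'brown')]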

Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering

Apr 22, 2022
Yu-Jung Heo, Eun-Sol Kim, Woo Suk Choi, Byoung-Tak Zhang


Knowledge-based visual question answering (QA) aims to answer questions that require visually grounded external knowledge beyond the image content itself. Answering complex questions that require multi-hop reasoning under weak supervision is challenging because i) no supervision is given for the reasoning process and ii) the high-order semantics of multi-hop knowledge facts need to be captured. In this paper, we introduce the concept of a hypergraph to encode the high-level semantics of a question and a knowledge base, and to learn high-order associations between them. The proposed model, Hypergraph Transformer, constructs a question hypergraph and a query-aware knowledge hypergraph, and infers an answer by encoding inter-associations between the two hypergraphs and intra-associations within each hypergraph itself. Extensive experiments on two knowledge-based visual QA benchmarks and two knowledge-based textual QA benchmarks demonstrate the effectiveness of our method, especially for multi-hop reasoning. Our source code is available at https://github.com/yujungheo/kbvqa-public.

* Accepted at ACL 2022 
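A minimal PyTorch sketch of the central idea, attention between two sets of hyperedge embeddings: the tensor shapes are toy values and a single shared attention layer is used purely for brevity, so this is an assumption-laden illustration rather than the released implementation (see the repository above for that).

    import torch
    import torch.nn as nn

    # Toy dimensions: Nq question hyperedges, Nk knowledge hyperedges, d-dim embeddings.
    Nq, Nk, d = 8, 32, 64
    question_hyperedges = torch.randn(1, Nq, d)    # batch of 1
    knowledge_hyperedges = torch.randn(1, Nk, d)

    attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

    # Inter-association: question hyperedges attend over knowledge hyperedges.
    q2k, _ = attn(question_hyperedges, knowledge_hyperedges, knowledge_hyperedges)
    # Intra-association: self-attention within the question hypergraph.
    q2q, _ = attn(question_hyperedges, question_hyperedges, question_hyperedges)

    # Pool and score candidate answers (answer embeddings are random placeholders).
    pooled = (q2k + q2q).mean(dim=1)               # (1, d)
    answer_candidates = torch.randn(5, d)
    scores = pooled @ answer_candidates.T          # (1, 5)
    print(scores.argmax(dim=-1))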

Toward a Human-Level Video Understanding Intelligence

Oct 18, 2021
Yu-Jung Heo, Minsu Lee, Seongho Choi, Woo Suk Choi, Minjung Shin, Minjoon Jung, Jeh-Kwang Ryu, Byoung-Tak Zhang


We aim to develop an AI agent that can watch video clips and hold a conversation with humans about the video story. Developing video understanding intelligence is a significantly challenging task, and evaluation methods for adequately measuring and analyzing the progress of AI agents are also lacking. In this paper, we propose the Video Turing Test to provide an effective and practical assessment of video understanding intelligence as well as a human-likeness evaluation of AI agents. We define a general format and procedure for the Video Turing Test and present a case study to confirm the effectiveness and usefulness of the proposed test.

* Presented at the AI-HRI symposium as part of AAAI-FSS 2021 (arXiv:2109.10836). The first two authors contributed equally

CogME: A Novel Evaluation Metric for Video Understanding Intelligence

Jul 21, 2021
Minjung Shin, Jeonghoon Kim, Seongho Choi, Yu-Jung Heo, Donghyun Kim, Minsu Lee, Byoung-Tak Zhang, Jeh-Kwang Ryu


Developing video understanding intelligence is quite challenging because it requires the holistic integration of images, scripts, and sounds based on natural language processing, temporal dependency, and reasoning. Recently, substantial attempts have been made on several large-scale video datasets with associated question answering (QA). However, existing evaluation metrics for video question answering (VideoQA) do not provide meaningful analysis. To make progress, we argue that a well-designed framework, grounded in the way humans understand, is required to explain and evaluate the performance of understanding in detail. We therefore propose a top-down evaluation system for VideoQA, based on the human cognitive process and story elements: Cognitive Modules for Evaluation (CogME). CogME is composed of three cognitive modules: targets, contents, and thinking. The interaction among the modules in the understanding procedure can be expressed in one sentence: "I understand the CONTENT of the TARGET through a way of THINKING." Each module has sub-components derived from the story elements. We can specify the required aspects of understanding by annotating the sub-components of individual questions. CogME thus provides a framework for an elaborate specification of VideoQA datasets. To examine the suitability of a VideoQA dataset for validating video understanding intelligence, we evaluated the baseline model of the DramaQA dataset by applying CogME. The evaluation reveals that story elements are unevenly reflected in the existing dataset, and that a model trained on it may produce biased predictions. Although this study covers only a narrow range of stories, we expect it to be a first step toward grounding the video understanding intelligence of both humans and AI in the human cognitive process.

* 17 pages with 3 figures and 3 tables 
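As an illustration of how module-level annotations could turn QA accuracy into a per-component profile, here is a hypothetical aggregation sketch; the tag names and records are invented for the example and do not come from the CogME annotation scheme itself.

    from collections import defaultdict

    # Hypothetical per-question records: CogME-style tags plus model correctness.
    results = [
        {"target": "character", "content": "emotion",  "thinking": "recall",    "correct": True},
        {"target": "character", "content": "behavior", "thinking": "causality", "correct": False},
        {"target": "object",    "content": "identity", "thinking": "recall",    "correct": True},
    ]

    def profile(results, module):
        """Accuracy broken down by the sub-components of one cognitive module."""
        hits, totals = defaultdict(int), defaultdict(int)
        for r in results:
            totals[r[module]] += 1
            hits[r[module]] += int(r["correct"])
        return {tag: hits[tag] / totals[tag] for tag in totals}

    for module in ("target", "content", "thinking"):
        print(module, profile(results, module))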

DramaQA: Character-Centered Video Story Understanding with Hierarchical QA

May 07, 2020
Seongho Choi, Kyoung-Woon On, Yu-Jung Heo, Ahjeong Seo, Youwon Jang, Seungchan Lee, Minsu Lee, Byoung-Tak Zhang


Despite recent progress in computer vision and natural language processing, video understanding intelligence is still hard to achieve due to the intrinsic difficulty of the stories in videos. Moreover, there is no theoretical metric for evaluating the degree of video understanding. In this paper, we propose a novel video question answering (Video QA) task, DramaQA, for comprehensive understanding of video stories. DramaQA focuses on two perspectives: 1) hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence, and 2) character-centered video annotations to model the local coherence of the story. Our dataset is built upon the TV drama "Another Miss Oh" and contains 16,191 QA pairs from 23,928 video clips of various lengths, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations, including visual bounding boxes, behaviors, and emotions of the main characters, as well as coreference-resolved scripts. Additionally, we provide analyses of the dataset and a Dual Matching Multistream model that effectively learns character-centered representations of video to answer questions about it. We plan to release our dataset and model publicly for research purposes and expect that our work will provide a new perspective on video story understanding research.

* 21 pages, 10 figures, submitted to ECCV 2020 
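To make the annotation layout concrete, below is a hypothetical record schema for one QA item; the field names and example values are assumptions for illustration and may not match the released data format.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CharacterAnnotation:
        name: str
        bbox: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in pixels
        behavior: str
        emotion: str

    @dataclass
    class DramaQAItem:
        clip_id: str
        difficulty: int                   # one of four hierarchical levels
        question: str
        answer_candidates: List[str]
        correct_idx: int
        script: str                       # coreference-resolved dialogue
        characters: List[CharacterAnnotation] = field(default_factory=list)

    # Placeholder instance, not an actual record from the dataset.
    item = DramaQAItem(
        clip_id="clip_0001",
        difficulty=2,
        question="Why is the main character upset?",
        answer_candidates=["...", "...", "...", "...", "..."],
        correct_idx=0,
        script="Haeyoung says that Haeyoung is late for work.",
    )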

Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data

Jan 17, 2020
Kyoung-Woon On, Eun-Sol Kim, Yu-Jung Heo, Byoung-Tak Zhang


Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e., first-order Markovian dependency. However, most sequential data, such as videos, have complex dependency structures that imply variable-length semantic flows and their compositions, which are hard to capture with conventional methods. Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to the frames of the video and their dependencies, respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph form via a parameterized kernel with graph-cut and a message-passing framework. We evaluate the proposed method on two different video understanding tasks: video theme classification (YouTube-8M dataset) and video question answering (TVQA dataset). The experimental results show that our model efficiently learns the semantic compositional structure of video data. Furthermore, our model achieves the highest performance in comparison with other baseline methods.

* 8 pages, 3 figures, Association for the Advancement of Artificial Intelligence (AAAI2020). arXiv admin note: substantial text overlap with arXiv:1907.01709 
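The sketch below illustrates the general flavor of the approach (frame affinities, a crude cut into a sparse graph, then message passing) rather than the paper's parameterized kernel or its graph-cut procedure; the cosine similarity, thresholding rule, and shapes are simplifying assumptions.

    import torch

    T, d = 12, 16                      # frames, feature dim
    frames = torch.randn(T, d)         # frame features (placeholders)

    # Affinity between frames; the paper learns this with a parameterized kernel,
    # here it is plain cosine similarity for brevity.
    x = torch.nn.functional.normalize(frames, dim=-1)
    affinity = x @ x.T                               # (T, T)

    # Crude stand-in for the cut: drop weak edges, keep a local temporal band.
    adjacency = (affinity > 0.1).float()
    band = (torch.arange(T).unsqueeze(0) - torch.arange(T).unsqueeze(1)).abs() <= 3
    adjacency = adjacency * band.float()

    # One round of message passing with row-normalized adjacency.
    deg = adjacency.sum(dim=-1, keepdim=True).clamp(min=1.0)
    messages = (adjacency / deg) @ frames            # (T, d)
    updated = frames + messages                      # residual update
    print(updated.shape)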

Compositional Structure Learning for Sequential Video Data

Jul 03, 2019
Kyoung-Woon On, Eun-Sol Kim, Yu-Jung Heo, Byoung-Tak Zhang


Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e., first-order Markovian dependency. However, most sequential data, such as videos, have complex temporal dependencies that imply variable-length semantic flows and their compositions, which are hard to capture with conventional methods. Here, we propose Temporal Dependency Networks (TDNs) for learning video data by discovering these complex structures of the videos. The TDNs represent a video as a graph whose nodes and edges correspond to the frames of the video and their dependencies, respectively. Via a parameterized kernel with graph-cut and graph convolutions, the TDNs find compositional temporal dependencies of the data in multilevel graph form. We evaluate the proposed method on the large-scale video dataset YouTube-8M. The experimental results show that our model efficiently learns the complex semantic structure of video data.


Constructing Hierarchical Q&A Datasets for Video Story Understanding

Apr 01, 2019
Yu-Jung Heo, Kyoung-Woon On, Seongho Choi, Jaeseo Lim, Jinah Kim, Jeh-Kwang Ryu, Byung-Chull Bae, Byoung-Tak Zhang


Video understanding is emerging as a new paradigm for studying human-like AI. Question-and-answering (Q&A) is used as a general benchmark to measure the level of intelligence for video understanding. While several previous studies have proposed datasets for video Q&A tasks, they did not truly incorporate story-level understanding, resulting in highly biased questions that lack variance in difficulty. In this paper, we propose a hierarchical method for building Q&A datasets, i.e., with hierarchical difficulty levels. We introduce three criteria for video story understanding: memory capacity, logical complexity, and the DIKW (Data-Information-Knowledge-Wisdom) pyramid. We discuss how a three-dimensional map constructed from these criteria can be used as a metric for evaluating the levels of intelligence related to video story understanding.

* Accepted to AAAI 2019 Spring Symposium Series : Story-Enabled Intelligence 
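To show how the three criteria could be combined into a single difficulty coordinate, here is a tiny hypothetical example; the level scales and the way they are scalarized are assumptions for illustration, not the paper's definition.

    from typing import NamedTuple

    class DifficultyCoord(NamedTuple):
        memory: int   # e.g. 1 = single shot ... higher = longer context
        logic: int    # e.g. 1 = recall ... higher = multi-step reasoning
        dikw: int     # 1 = data, 2 = information, 3 = knowledge, 4 = wisdom

    def overall_level(c: DifficultyCoord) -> int:
        # One possible scalarization: the hardest axis dominates.
        return max(c.memory, c.logic, c.dikw)

    q1 = DifficultyCoord(memory=1, logic=1, dikw=1)   # "What color is the car?"
    q2 = DifficultyCoord(memory=3, logic=2, dikw=3)   # "Why did she leave the party?"
    print(overall_level(q1), overall_level(q2))       # 1 3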

Visualizing Semantic Structures of Sequential Data by Learning Temporal Dependencies

Jan 20, 2019
Kyoung-Woon On, Eun-Sol Kim, Yu-Jung Heo, Byoung-Tak Zhang


While conventional methods for sequential learning focus on interactions between consecutive inputs, we suggest a new method that captures composite semantic flows with variable-length dependencies. In addition, the semantic structures within given sequential data can be interpreted by visualizing the temporal dependencies learned by the method. The proposed method, called the Temporal Dependency Network (TDN), represents a video as a temporal graph whose nodes represent frames of the video and whose edges represent the temporal dependencies between two frames of variable distance. The temporal dependency structure of the semantics is discovered by learning the parameterized kernels of graph convolutional methods. We evaluate the proposed method on the large-scale video dataset YouTube-8M. By visualizing the learned temporal dependency structures, we show that the suggested method can find the temporal dependency structures of video semantics.

* In AAAI-19 Workshop on Network Interpretability for Deep Learning 
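A minimal sketch of the kind of visualization described: plotting a frame-to-frame dependency matrix as a heatmap with matplotlib. The matrix here is random placeholder data; in practice it would come from the learned kernel, which is not reproduced in this sketch.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 30
    rng = np.random.default_rng(0)
    # Placeholder dependency weights; a trained TDN would supply these.
    dependencies = rng.random((T, T))
    dependencies = np.triu(dependencies, k=1)   # keep forward-in-time edges only

    plt.imshow(dependencies, cmap="viridis")
    plt.xlabel("frame index")
    plt.ylabel("frame index")
    plt.title("Temporal dependencies (placeholder values)")
    plt.colorbar(label="dependency weight")
    plt.savefig("temporal_dependencies.png", dpi=150)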