Sangyoun Lee

Adaptive Graph Convolution Module for Salient Object Detection

Mar 17, 2023
Yongwoo Lee, Minhyeok Lee, Suhwan Cho, Sangyoun Lee

Salient object detection (SOD) is a task that involves identifying and segmenting the most visually prominent object in an image. Existing solutions accomplish this using a multi-scale feature fusion mechanism to capture the global context of an image. However, because they consider neither the structures in the image nor the relations between distant pixels, conventional methods cannot deal with complex scenes effectively. In this paper, we propose an adaptive graph convolution module (AGCM) to overcome these limitations. Prototype features are initially extracted from the input image using a learnable region generation layer that spatially groups features in the image. The prototype features are then refined by propagating information between them based on a graph architecture, where each feature is regarded as a node. Experimental results show that the proposed AGCM dramatically improves the SOD performance both quantitatively and qualitatively.
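
As a rough illustration of the mechanism the abstract describes (soft grouping of pixels into prototype nodes, graph propagation between the nodes, and redistribution back to the feature map), here is a minimal PyTorch-style sketch; the module name, layer choices, and hyperparameters are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeGraphConv(nn.Module):
    # Group spatial features into K prototypes, propagate information between
    # prototype nodes with one graph step, and redistribute the refined
    # prototypes back onto the feature map. Layer choices are assumptions.
    def __init__(self, channels: int, num_prototypes: int = 8):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_prototypes, kernel_size=1)  # learnable region generation
        self.node_transform = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        a = self.assign(x).flatten(2).softmax(dim=1)           # (B, K, HW): per-pixel soft assignment
        a_pool = a / (a.sum(dim=2, keepdim=True) + 1e-6)       # normalize over pixels for pooling
        feats = x.flatten(2)                                   # (B, C, HW)
        prototypes = torch.bmm(a_pool, feats.transpose(1, 2))  # (B, K, C) prototype features

        # Data-dependent adjacency between prototype nodes, then one propagation step.
        adj = torch.softmax(torch.bmm(prototypes, prototypes.transpose(1, 2)), dim=-1)
        refined = F.relu(self.node_transform(torch.bmm(adj, prototypes)))

        # Redistribute refined prototypes to pixels and fuse with the input.
        out = torch.bmm(refined.transpose(1, 2), a).view(b, c, h, w)
        return x + out

# Example: y = PrototypeGraphConv(64)(torch.randn(2, 64, 32, 32))
```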

* 4 pages, 3 figures 

Guided Slot Attention for Unsupervised Video Object Segmentation

Mar 15, 2023
Minhyeok Lee, Suhwan Cho, Dogyoon Lee, Chaewon Park, Jungho Lee, Sangyoun Lee

Unsupervised video object segmentation aims to segment the most prominent object in a video sequence. However, the existence of complex backgrounds and multiple foreground objects makes this task challenging. To address this issue, we propose a guided slot attention network to reinforce spatial structural information and obtain better foreground-background separation. The foreground and background slots, which are initialized with query guidance, are iteratively refined based on interactions with template information. Furthermore, to improve slot-template interaction and effectively fuse global and local features in the target and reference frames, K-nearest neighbors filtering and a feature aggregation transformer are introduced. The proposed model achieves state-of-the-art performance on two popular datasets. Additionally, we demonstrate the robustness of the proposed model in challenging scenes through various comparative experiments.
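
For readers unfamiliar with slot attention, the sketch below shows the generic iterative refinement loop that guided slot attention builds on: slots compete for frame features via attention and are updated with a GRU. It is a standard slot-attention-style routine, assuming PyTorch; it is not the paper's guided variant, which additionally uses query guidance, KNN filtering, and a feature aggregation transformer.

```python
import torch
import torch.nn as nn

class SlotRefiner(nn.Module):
    # Generic slot-attention refinement: slots attend to frame features
    # (softmax over slots so they compete for pixels) and are updated by a GRU.
    def __init__(self, dim: int, iters: int = 3):
        super().__init__()
        self.iters = iters
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, slots: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # slots: (B, S, D), e.g. one foreground and one background slot; feats: (B, N, D).
        b, s, d = slots.shape
        k, v = self.to_k(feats), self.to_v(feats)
        for _ in range(self.iters):
            q = self.to_q(slots)
            attn = torch.softmax(torch.bmm(q, k.transpose(1, 2)) * self.scale, dim=1)  # compete over slots
            attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)                      # normalize per slot
            updates = torch.bmm(attn, v)                                               # (B, S, D)
            slots = self.update(updates.reshape(-1, d), slots.reshape(-1, d)).view(b, s, d)
        return slots
```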

TSANET: Temporal and Scale Alignment for Unsupervised Video Object Segmentation

Mar 08, 2023
Seunghoon Lee, Suhwan Cho, Dogyoon Lee, Minhyeok Lee, Sangyoun Lee

Unsupervised Video Object Segmentation (UVOS) refers to the challenging task of segmenting the prominent object in videos without manual guidance. In other words, the network detects the accurate region of the target object in a sequence of RGB frames without prior knowledge. Recent works on UVOS can be divided into two approaches: appearance-based and appearance-motion-based methods. Appearance-based methods utilize inter-frame correlation information to capture the target object that commonly appears across a sequence. However, these methods do not consider the motion of the target object because they exploit correlation information between randomly paired frames. Appearance-motion-based methods, on the other hand, fuse the appearance features from RGB frames with the motion features from optical flow. The motion cue provides useful information because salient objects typically show distinctive motion in a sequence. However, these approaches are limited by their dominant dependency on optical flow. In this paper, we propose a novel framework for UVOS that addresses the aforementioned limitations of both approaches in terms of both time and scale. Temporal Alignment Fusion aligns the saliency information of adjacent frames with the target frame to leverage the information of adjacent frames. Scale Alignment Decoder predicts the target object mask precisely by aggregating differently scaled feature maps via continuous mapping with an implicit neural representation. We present experimental results on the public benchmark datasets DAVIS 2016 and FBMS, which demonstrate the effectiveness of our method. Furthermore, we outperform the state-of-the-art methods on DAVIS 2016.
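
The Scale Alignment Decoder is described as aggregating differently scaled feature maps via continuous mapping with an implicit neural representation. A minimal sketch of that general idea (sampling multi-scale features at continuous coordinates and decoding them with an MLP) is shown below, assuming PyTorch; the class name and layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousMaskDecoder(nn.Module):
    # Implicit-neural-representation style decoding: sample each scale's feature
    # map at continuous query coordinates and predict a mask logit per point.
    def __init__(self, channels_per_scale, hidden: int = 128):
        super().__init__()
        in_dim = sum(channels_per_scale) + 2  # concatenated features + (x, y) coordinate
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, 1)
        )

    def forward(self, feats, coords):
        # feats: list of (B, C_i, H_i, W_i); coords: (B, Q, 2) in [-1, 1].
        grid = coords.unsqueeze(2)                     # (B, Q, 1, 2) for grid_sample
        sampled = [
            F.grid_sample(f, grid, mode="bilinear", align_corners=False).squeeze(-1).transpose(1, 2)
            for f in feats                             # each -> (B, Q, C_i)
        ]
        x = torch.cat(sampled + [coords], dim=-1)      # (B, Q, sum(C_i) + 2)
        return self.mlp(x).squeeze(-1)                 # (B, Q) mask logits per query point
```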

One-Shot Video Inpainting

Feb 28, 2023
Sangjin Lee, Suhwan Cho, Sangyoun Lee

Recently, removing objects from videos and filling in the erased regions using deep video inpainting (VI) algorithms has attracted considerable attention. Usually, a video sequence and object segmentation masks for all frames are required as the input for this task. However, in real-world applications, providing segmentation masks for all frames is quite difficult and inefficient. Therefore, we deal with VI in a one-shot manner, which only takes the initial frame's object mask as its input. Although this can be achieved by naively combining video object segmentation (VOS) and VI methods, such combinations are sub-optimal and generally cause critical errors. To address this, we propose a unified pipeline for one-shot video inpainting (OSVI). By jointly learning mask prediction and video completion in an end-to-end manner, the results are optimal for the entire task rather than for each separate module. Additionally, unlike two-stage methods that use the predicted masks as ground-truth cues, our method is more reliable because the predicted masks serve as the network's internal guidance. On synthesized datasets for OSVI, our proposed method outperforms all others both quantitatively and qualitatively.
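
A minimal sketch of what "jointly learning mask prediction and video completion" could look like as a training objective is given below; the specific loss terms and weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_osvi_loss(mask_logits, gt_masks, completed_frames, gt_completed, lambda_inpaint: float = 1.0):
    # Segmentation and completion are optimized together so that errors in either
    # branch influence the shared network. The terms and weighting are assumptions.
    seg_loss = F.binary_cross_entropy_with_logits(mask_logits, gt_masks.float())
    inpaint_loss = F.l1_loss(completed_frames, gt_completed)  # GT completed frames from a synthesized dataset
    return seg_loss + lambda_inpaint * inpaint_loss
```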

* AAAI2023 submitted 

Two-stream Decoder Feature Normality Estimating Network for Industrial Anomaly Detection

Feb 20, 2023
Chaewon Park, Minhyeok Lee, Suhwan Cho, Donghyeong Kim, Sangyoun Lee

Image reconstruction-based anomaly detection has recently been in the spotlight because of the difficulty of constructing anomaly datasets. These approaches work by learning to model normal features without seeing abnormal samples during training and then discriminating anomalies at test time based on reconstruction errors. However, these models have limitations in reconstructing abnormal samples because of their indiscriminate conveyance of features. Moreover, these approaches are not explicitly optimized to distinguish anomalies. To address these problems, we propose a two-stream decoder network (TSDN), designed to learn both normal and abnormal features. Additionally, we propose a feature normality estimator (FNE) to eliminate abnormal features and prevent high-quality reconstruction of abnormal regions. Evaluation on a standard benchmark demonstrates performance superior to that of state-of-the-art models.
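
To make the reconstruction-based setup concrete, below is a generic two-stream encoder-decoder sketch in PyTorch: a shared encoder feeds two decoders, and anomalies are scored by reconstruction error. The layer sizes, the scoring rule, and the omission of the FNE are assumptions of this sketch, not the published TSDN design.

```python
import torch
import torch.nn as nn

class TwoStreamDecoderSketch(nn.Module):
    # Shared encoder with two decoders: one reconstructs normal appearance,
    # the other stream is reserved for abnormal features. Illustrative only.
    def __init__(self, in_ch: int = 3, width: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        def decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(width, in_ch, 4, stride=2, padding=1),
            )
        self.normal_decoder = decoder()
        self.abnormal_decoder = decoder()

    def forward(self, x):
        z = self.encoder(x)
        recon_normal = self.normal_decoder(z)
        recon_abnormal = self.abnormal_decoder(z)
        # Per-pixel anomaly score from the normal stream's reconstruction error.
        score = (x - recon_normal).pow(2).mean(dim=1, keepdim=True)
        return recon_normal, recon_abnormal, score
```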

* Accepted to IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2023 

Feature Disentanglement Learning with Switching and Aggregation for Video-based Person Re-Identification

Dec 16, 2022
Minjung Kim, MyeongAh Cho, Sangyoun Lee

In video person re-identification (Re-ID), the network must consistently extract the features of the target person from successive frames. Existing methods tend to focus only on how to use temporal information, which often leads to networks being fooled by similar appearances and the same backgrounds. In this paper, we propose a Disentanglement and Switching and Aggregation Network (DSANet), which segregates the features representing identity from features based on camera characteristics and pays more attention to ID information. We also introduce an auxiliary task that utilizes a new pair of features created through switching and aggregation to increase the network's capability for various camera scenarios. Furthermore, we devise a Target Localization Module (TLM) that extracts features robust to changes in the target's position across the frame flow and a Frame Weight Generation (FWG) module that reflects temporal information in the final representation. Various loss functions for disentanglement learning are designed so that each component of the network can cooperate while satisfactorily performing its own role. Quantitative and qualitative results from extensive experiments demonstrate the superiority of DSANet over state-of-the-art methods on three benchmark datasets.
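
The switching-and-aggregation auxiliary task can be pictured roughly as re-pairing disentangled identity features with camera features from other samples in the batch. The snippet below is only a hypothetical illustration of that idea; the pairing and fusion rules are assumptions, not DSANet's actual operators.

```python
import torch

def switch_and_aggregate(id_feat: torch.Tensor, cam_feat: torch.Tensor):
    # id_feat, cam_feat: (B, D) disentangled identity / camera-characteristic features.
    # Re-pair each identity feature with another sample's camera feature so the
    # network sees the same identity under different camera characteristics.
    switched_cam = torch.roll(cam_feat, shifts=1, dims=0)
    switched = torch.cat([id_feat, switched_cam], dim=1)  # identity + foreign camera characteristics
    original = torch.cat([id_feat, cam_feat], dim=1)      # original pairing for reference
    return switched, original
```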

* WACV 2023 

Leveraging Spatio-Temporal Dependency for Skeleton-Based Action Recognition

Dec 09, 2022
Jungho Lee, Minhyeok Lee, Suhwan Cho, Sungmin Woo, Sangyoun Lee

Skeleton-based action recognition has attracted considerable attention due to the compact skeletal structure of the human body. Many recent methods have achieved remarkable performance using graph convolutional networks (GCNs) and convolutional neural networks (CNNs), which extract spatial and temporal features, respectively. Although spatial and temporal dependencies in the human skeleton have been explored, spatio-temporal dependency is rarely considered. In this paper, we propose the Inter-Frame Curve Network (IFC-Net) to effectively leverage the spatio-temporal dependency of the human skeleton. Our proposed network consists of two novel elements: 1) the Inter-Frame Curve (IFC) module and 2) Dilated Graph Convolution (D-GC). The IFC module increases the spatio-temporal receptive field by identifying meaningful node connections between adjacent frames and generating spatio-temporal curves based on the identified connections. The D-GC allows the network to have a large spatial receptive field that specifically focuses on the spatial domain. The kernels of D-GC are computed from the given adjacency matrices of the graph and provide a large receptive field in a manner similar to dilated CNNs. Our IFC-Net combines these two modules and achieves state-of-the-art performance on three skeleton-based action recognition benchmarks: NTU-RGB+D 60, NTU-RGB+D 120, and Northwestern-UCLA.
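
The description of D-GC (kernels computed from the graph's adjacency matrices, giving a large receptive field analogous to dilated CNNs) can be illustrated with a k-hop graph convolution, sketched below in PyTorch; the normalization and the use of a matrix power are assumptions of the sketch, not necessarily the paper's exact kernel construction.

```python
import torch
import torch.nn as nn

class DilatedGraphConv(nn.Module):
    # A k-hop propagation matrix (power of the normalized skeleton adjacency)
    # plays the role of a dilated kernel, enlarging the spatial receptive field.
    def __init__(self, in_ch: int, out_ch: int, adjacency: torch.Tensor, dilation: int = 2):
        super().__init__()
        a = adjacency.float() + torch.eye(adjacency.size(0))    # (V, V) adjacency with self-loops
        d_inv_sqrt = a.sum(1).clamp(min=1e-6).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]  # symmetric normalization
        self.register_buffer("kernel", torch.matrix_power(a_norm, dilation))  # k-hop propagation
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, V, C) skeleton features over time T and joints V.
        return self.proj(torch.einsum("uv,btvc->btuc", self.kernel, x))
```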

* 12 pages, 5 figures 

Occluded Person Re-Identification via Relational Adaptive Feature Correction Learning

Dec 09, 2022
Minjung Kim, MyeongAh Cho, Heansung Lee, Suhwan Cho, Sangyoun Lee

Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects, especially in crowded scenes. In addition to the processes performed during holistic person Re-ID, occluded person Re-ID involves the removal of obstacles and the detection of partially visible body parts. Most existing methods utilize off-the-shelf pose or parsing networks as pseudo labels, which are prone to error. To address these issues, we propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks. In addition, we present the simple concept of a center feature to provide an intuitive solution to pedestrian occlusion scenarios. Furthermore, we suggest Separation Loss (SL), which encourages global features and part features to focus on different parts. We conduct extensive experiments on five challenging benchmark datasets for occluded and holistic Re-ID tasks to demonstrate that our method achieves performance superior to that of state-of-the-art methods, especially in occluded scenes.
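
As a rough stand-in for the idea behind Separation Loss, the snippet below penalizes the cosine similarity between each part feature and the global feature so that they capture different information; the actual SL formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def separation_loss(global_feat: torch.Tensor, part_feats: torch.Tensor) -> torch.Tensor:
    # global_feat: (B, D); part_feats: (B, P, D).
    # Encourage part features to be dissimilar to the global feature (illustrative only).
    sim = F.cosine_similarity(part_feats, global_feat.unsqueeze(1), dim=-1)  # (B, P)
    return sim.abs().mean()
```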

* ICASSP 2022 

DP-NeRF: Deblurred Neural Radiance Field with Physical Scene Priors

Dec 02, 2022
Dogyoon Lee, Minhyeok Lee, Chajin Shin, Sangyoun Lee

Neural Radiance Field (NeRF) has exhibited outstanding three-dimensional (3D) reconstruction quality via novel view synthesis from multi-view images and paired calibrated camera parameters. However, previous NeRF-based systems have been demonstrated under strictly controlled settings, with little attention paid to less ideal scenarios, including the presence of noise such as exposure variation, illumination changes, and blur. In particular, though blur frequently occurs in real situations, NeRF that can handle blurred images has received little attention. The few studies that have investigated NeRF for blurred images have not considered geometric and appearance consistency in 3D space, which is one of the most important factors in 3D reconstruction. This leads to inconsistency and degradation of the perceptual quality of the constructed scene. Hence, this paper proposes DP-NeRF, a novel clean NeRF framework for blurred images, which is constrained by two physical priors. These priors are derived from the actual blurring process during image acquisition by the camera. DP-NeRF introduces a rigid blurring kernel to impose 3D consistency by utilizing the physical priors and an adaptive weight proposal to refine the color composition error in consideration of the relationship between depth and blur. We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur. The results demonstrate that DP-NeRF successfully improves the perceptual quality of the constructed NeRF while ensuring 3D geometric and appearance consistency. We further demonstrate the effectiveness of our model with a comprehensive ablation analysis.
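
The rigid blurring kernel models a blurred pixel as a weighted composition of colors rendered along several rigidly transformed rays that approximate the camera's motion during exposure. The snippet below illustrates only that composition step, with the rigid transformations and weight prediction omitted; tensor shapes and the softmax weighting are assumptions for the sketch.

```python
import torch

def compose_blurred_color(ray_colors: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # ray_colors: (N_rays, K, 3) colors rendered along K transformed rays per pixel.
    # weights:    (N_rays, K) predicted composition weights.
    w = torch.softmax(weights, dim=-1)
    return (w.unsqueeze(-1) * ray_colors).sum(dim=1)  # (N_rays, 3) composed (blurred) color
```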

Global-Local Aggregation with Deformable Point Sampling for Camouflaged Object Detection

Nov 22, 2022
Minhyeok Lee, Suhwan Cho, Chaewon Park, Dogyoon Lee, Jungho Lee, Sangyoun Lee

The camouflaged object detection (COD) task aims to find and segment objects whose color or texture is very similar to that of the background. Despite the difficulties of the task, COD is attracting attention in medical, lifesaving, and anti-military fields. To overcome the difficulties of COD, we propose a novel global-local aggregation architecture with a deformable point sampling method. Further, we propose a global-local aggregation transformer that integrates an object's global information, background, and boundary local information, which is important in COD tasks. The proposed transformer obtains global information from feature channels and effectively extracts important local information from subdivided patches using the deformable point sampling method. Accordingly, the model effectively integrates global and local information for camouflaged objects and shows that important boundary information in COD can be efficiently utilized. Our method is evaluated on three popular datasets and achieves state-of-the-art performance. We demonstrate the effectiveness of the proposed method through comparative experiments.
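
Deformable point sampling can be pictured as shifting a regular grid of sampling points by learned offsets and gathering features at the shifted locations. The PyTorch sketch below illustrates this generic mechanism; the offset head, grid size, and scaling are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePointSampler(nn.Module):
    # Predict per-point offsets, shift a regular sampling grid, and gather
    # local descriptors with bilinear interpolation. Illustrative only.
    def __init__(self, channels: int, points_per_side: int = 8):
        super().__init__()
        self.points_per_side = points_per_side
        self.offset_head = nn.Conv2d(channels, 2, kernel_size=1)  # per-point (dx, dy)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        p = self.points_per_side
        # Regular base grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, p, device=feats.device)
        xs = torch.linspace(-1, 1, p, device=feats.device)
        base = torch.stack(torch.meshgrid(xs, ys, indexing="xy"), dim=-1)  # (P, P, 2) as (x, y)
        base = base.unsqueeze(0).expand(b, -1, -1, -1)                     # (B, P, P, 2)
        # Predict offsets at pooled base locations and shift the sampling points.
        coarse = F.adaptive_avg_pool2d(feats, p)                           # (B, C, P, P)
        offsets = self.offset_head(coarse).permute(0, 2, 3, 1).tanh() * (2.0 / p)
        grid = (base + offsets).clamp(-1, 1)
        return F.grid_sample(feats, grid, mode="bilinear", align_corners=False)  # (B, C, P, P)
```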
