Junhwa Hur

Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence

Nov 28, 2023
Junyi Zhang, Charles Herrmann, Junhwa Hur, Eric Chen, Varun Jampani, Deqing Sun, Ming-Hsuan Yang

While pre-trained large-scale vision models have shown significant promise for semantic correspondence, their features often struggle to grasp the geometry and orientation of instances. This paper identifies the importance of being geometry-aware for semantic correspondence and reveals a limitation of current foundation-model features under simple post-processing. We show that incorporating this information can markedly enhance semantic correspondence performance with simple but effective solutions in both zero-shot and supervised settings. We also construct a new challenging benchmark for semantic correspondence, built from an existing animal pose estimation dataset, for both pre-training and validating models. Our method achieves a PCK@0.10 score of 64.2 (zero-shot) and 85.6 (supervised) on the challenging SPair-71k dataset, outperforming the state of the art by absolute gains of 4.3 and 11.0 points, respectively. Our code and datasets will be publicly available.

* Project page: https://telling-left-from-right.github.io/ 
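
A minimal sketch of the PCK@0.10 metric quoted above (percentage of correct keypoints), using the bounding-box normalization commonly used for SPair-71k evaluation. This is an illustration of the metric, not the authors' evaluation code; the function name, array shapes, and toy keypoints are assumptions.

```python
import numpy as np

def pck(pred_kps, gt_kps, bbox_hw, alpha=0.10):
    """Percentage of Correct Keypoints (PCK@alpha).

    A predicted keypoint counts as correct if it lies within
    alpha * max(bbox_height, bbox_width) of its ground-truth location.
    pred_kps, gt_kps: (N, 2) arrays of (x, y) coordinates.
    bbox_hw: (height, width) of the target object's bounding box.
    """
    threshold = alpha * max(bbox_hw)
    dists = np.linalg.norm(pred_kps - gt_kps, axis=1)
    return float((dists <= threshold).mean())

# Toy example: 3 of 4 predictions fall within the threshold -> PCK = 0.75
pred = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 75.0], [5.0, 90.0]])
gt   = np.array([[11.0, 13.0], [42.0, 40.0], [60.0, 75.0], [5.0, 88.0]])
print(pck(pred, gt, bbox_hw=(100, 80)))  # 0.75
```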

The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation

Jun 02, 2023
Saurabh Saxena, Charles Herrmann, Junhwa Hur, Abhishek Kar, Mohammad Norouzi, Deqing Sun, David J. Fleet

Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity. We show that they also excel in estimating optical flow and monocular depth, surprisingly, without the task-specific architectures and loss functions that are predominant for these tasks. Compared to the point estimates of conventional regression-based methods, diffusion models also enable Monte Carlo inference, e.g., capturing uncertainty and ambiguity in flow and depth. With self-supervised pre-training, the combined use of synthetic and real data for supervised training, technical innovations (infilling and step-unrolled denoising diffusion training) to handle noisy, incomplete training data, and a simple form of coarse-to-fine refinement, one can train state-of-the-art diffusion models for depth and optical flow estimation. Extensive experiments focus on quantitative performance against benchmarks, ablations, and the model's ability to capture uncertainty and multimodality and to impute missing values. Our model, DDVM (Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26% on the KITTI optical flow benchmark, about 25% better than the best published method. For an overview see https://diffusion-vision.github.io.
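
One point worth unpacking from the abstract is Monte Carlo inference: because a diffusion model samples from a distribution rather than producing a single regression output, repeated sampling gives an empirical per-pixel spread. The sketch below illustrates that idea with a hypothetical `sample_fn` standing in for a reverse-diffusion sampler; it is not DDVM's API.

```python
import torch

def monte_carlo_depth(sample_fn, image, num_samples=8):
    """Illustrative Monte Carlo inference for a diffusion-based estimator.

    sample_fn(image) -> (H, W) depth (or flow) map; each call is assumed to
    run the reverse diffusion process with fresh noise, i.e. draw one sample.
    Repeating the call yields a per-pixel empirical distribution.
    """
    samples = torch.stack([sample_fn(image) for _ in range(num_samples)])
    mean = samples.mean(dim=0)   # point estimate
    std = samples.std(dim=0)     # per-pixel uncertainty / ambiguity
    return mean, std

# Toy stand-in sampler; a real model would run reverse diffusion here.
H, W = 4, 4
fake_sampler = lambda img: torch.rand(H, W)
mean_depth, uncertainty = monte_carlo_depth(fake_sampler, image=None)
print(mean_depth.shape, uncertainty.shape)  # torch.Size([4, 4]) torch.Size([4, 4])
```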


A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence

May 24, 2023
Junyi Zhang, Charles Herrmann, Junhwa Hur, Luisa Polania Cabrera, Varun Jampani, Deqing Sun, Ming-Hsuan Yang

Text-to-image diffusion models have made significant advances in generating and editing high-quality images. As a result, numerous approaches have explored the ability of diffusion model features to understand and process single images for downstream tasks, e.g., classification, semantic segmentation, and stylization. However, significantly less is known about what these features reveal across multiple, different images and objects. In this work, we exploit Stable Diffusion (SD) features for semantic and dense correspondence and discover that, with simple post-processing, SD features perform quantitatively on par with SOTA representations. Interestingly, the qualitative analysis reveals that SD features have very different properties compared to existing representation learning features, such as the recently released DINOv2: while DINOv2 provides sparse but accurate matches, SD features provide high-quality spatial information but sometimes inaccurate semantic matches. We demonstrate that a simple fusion of these two features works surprisingly well, and that a zero-shot evaluation using nearest neighbors on the fused features provides a significant performance gain over state-of-the-art methods on benchmark datasets, e.g., SPair-71k, PF-Pascal, and TSS. We also show that these correspondences enable interesting applications such as instance swapping between two images.

* Project page: https://sd-complements-dino.github.io/ 
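
A minimal sketch of the fusion and zero-shot nearest-neighbor matching described above: L2-normalize each descriptor source so neither dominates, weight and concatenate them, then match by cosine similarity. The shapes, the weighting scheme, and the random stand-in features are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fuse_features(sd_feat, dino_feat, alpha=0.5):
    """Fuse per-pixel descriptors from two sources (rough sketch).

    sd_feat: (N, C1) and dino_feat: (N, C2) descriptors already resampled to
    the same spatial resolution and flattened. Each source is L2-normalised
    before weighting and concatenation.
    """
    sd = F.normalize(sd_feat, dim=-1) * alpha
    dino = F.normalize(dino_feat, dim=-1) * (1.0 - alpha)
    return F.normalize(torch.cat([sd, dino], dim=-1), dim=-1)

def nearest_neighbor_match(src_desc, tgt_desc):
    """For every source descriptor, return the index of the most similar
    target descriptor (cosine similarity = dot product of unit vectors)."""
    sim = src_desc @ tgt_desc.t()   # (N_src, N_tgt)
    return sim.argmax(dim=1)

# Toy usage with random descriptors standing in for real SD / DINOv2 features.
src = fuse_features(torch.randn(64, 1280), torch.randn(64, 768))
tgt = fuse_features(torch.randn(64, 1280), torch.randn(64, 768))
print(nearest_neighbor_match(src, tgt).shape)  # torch.Size([64])
```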

Self-supervised AutoFlow

Dec 08, 2022
Hsin-Ping Huang, Charles Herrmann, Junhwa Hur, Erika Lu, Kyle Sargent, Austin Stone, Ming-Hsuan Yang, Deqing Sun

Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
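
As a rough illustration of "self-supervised loss as the search metric," the sketch below computes a plain photometric warping error: warp the second frame back with a candidate flow and compare it to the first frame, so a lower score ranks a candidate training set higher without any ground truth. This is a generic self-supervised loss written for illustration, not the paper's exact metric.

```python
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """Warp img2 back to img1 with the predicted flow and measure the error.

    img1, img2: (B, 3, H, W); flow: (B, 2, H, W) in pixels, channel 0 = x.
    """
    B, _, H, W = img1.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float()            # (2, H, W)
    coords = grid.unsqueeze(0) + flow                      # follow the flow
    # Normalise coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    sample_grid = torch.stack([coords_x, coords_y], dim=-1)  # (B, H, W, 2)
    warped = F.grid_sample(img2, sample_grid, align_corners=True)
    return (img1 - warped).abs().mean()

loss = photometric_loss(torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8),
                        torch.zeros(1, 2, 8, 8))
print(loss.item())
```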


RAFT-MSF: Self-Supervised Monocular Scene Flow using Recurrent Optimizer

May 03, 2022
Bayram Bayramli, Junhwa Hur, Hongtao Lu

Learning scene flow from a monocular camera remains challenging due to the ill-posedness of the problem and the lack of annotated data. Self-supervised methods can learn scene flow estimation from unlabeled data, yet their accuracy lags behind that of (semi-)supervised methods. In this paper, we introduce a self-supervised monocular scene flow method that substantially improves the accuracy over previous approaches. Based on RAFT, a state-of-the-art optical flow model, we design a new decoder that iteratively updates 3D motion fields and disparity maps simultaneously. Furthermore, we propose an enhanced upsampling layer and a disparity initialization technique, which together further improve accuracy by up to 7.2%. Our method achieves state-of-the-art accuracy among all self-supervised monocular scene flow methods, improving accuracy by 34.2%. Our fine-tuned model outperforms the best previous semi-supervised method while running 228 times faster. Code will be publicly available.
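
A toy sketch of the iterative joint update described above: a GRU-based block repeatedly predicts residuals for a 3D motion field and a disparity map and adds them to the current estimates. The layer sizes, the flattened per-pixel layout, and the absence of a correlation lookup are simplifications for illustration, not the paper's decoder.

```python
import torch
import torch.nn as nn

class JointUpdateBlock(nn.Module):
    """Toy recurrent decoder refining 3D motion and disparity together,
    in the spirit of RAFT-style iterative updates (a sketch only)."""
    def __init__(self, feat_dim=64, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim + 4, hidden_dim)  # features + current estimates
        self.head = nn.Linear(hidden_dim, 4)             # delta (3D motion, disparity)

    def forward(self, feat, iters=8):
        B, N, _ = feat.shape                             # flattened per-pixel features
        motion = feat.new_zeros(B, N, 3)
        disparity = feat.new_zeros(B, N, 1)
        h = feat.new_zeros(B * N, self.gru.hidden_size)
        for _ in range(iters):
            x = torch.cat([feat, motion, disparity], dim=-1).view(B * N, -1)
            h = self.gru(x, h)
            delta = self.head(h).view(B, N, 4)
            motion = motion + delta[..., :3]             # residual updates
            disparity = disparity + delta[..., 3:]
        return motion, disparity

motion, disp = JointUpdateBlock()(torch.randn(2, 16, 64))
print(motion.shape, disp.shape)  # torch.Size([2, 16, 3]) torch.Size([2, 16, 1])
```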


Self-Supervised Multi-Frame Monocular Scene Flow

May 05, 2021
Junhwa Hur, Stefan Roth

Estimating 3D scene flow from a sequence of monocular images has been gaining increased attention due to the simple, economical capture setup. Owing to the severe ill-posedness of the problem, the accuracy of current methods has been limited, especially that of efficient, real-time approaches. In this paper, we introduce a multi-frame monocular scene flow network based on self-supervised learning, improving the accuracy over previous networks while retaining real-time efficiency. Based on an advanced two-frame baseline with a split-decoder design, we propose (i) a multi-frame model using a triple frame input and convolutional LSTM connections, (ii) an occlusion-aware census loss for better accuracy, and (iii) a gradient detaching strategy to improve training stability. On the KITTI dataset, we observe state-of-the-art accuracy among monocular scene flow methods based on self-supervised learning.

* To appear at CVPR 2021. Code available: https://github.com/visinf/multi-mono-sf 
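
A small sketch of the gradient-detaching strategy in (iii): when unrolling a recurrent cell over a frame sequence, detach the hidden state between time steps so gradients do not propagate across the whole sequence, which can help training stability. The toy cell below is an illustrative assumption; the paper uses convolutional LSTM connections.

```python
import torch

def recurrent_forward(cell, inputs, detach_between_frames=True):
    """Unroll a recurrent cell over per-frame inputs, optionally detaching
    the hidden state between time steps (sketch of the idea, not the
    paper's implementation).
    """
    h = None
    outputs = []
    for x in inputs:
        out, h = cell(x, h)
        if detach_between_frames:
            h = h.detach()          # block backprop through time
        outputs.append(out)
    return outputs

# Toy cell: a linear layer acting as both output and next hidden state.
lin = torch.nn.Linear(8, 8)
def toy_cell(x, h):
    h = lin(x if h is None else x + h)
    return h, h

outs = recurrent_forward(toy_cell, [torch.randn(2, 8) for _ in range(3)])
print(len(outs), outs[0].shape)  # 3 torch.Size([2, 8])
```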

Self-Supervised Monocular Scene Flow Estimation

Apr 15, 2020
Junhwa Hur, Stefan Roth

Scene flow estimation has been receiving increasing attention for 3D environment perception. Monocular scene flow estimation -- obtaining 3D structure and 3D motion from two temporally consecutive images -- is a highly ill-posed problem, and practical solutions are lacking to date. We propose a novel monocular scene flow method that yields competitive accuracy and real-time performance. By taking an inverse problem view, we design a single convolutional neural network (CNN) that successfully estimates depth and 3D motion simultaneously from a classical optical flow cost volume. We adopt self-supervised learning with 3D loss functions and occlusion reasoning to leverage unlabeled data. We validate our design choices, including the proxy loss and augmentation setup. Our model achieves state-of-the-art accuracy among unsupervised/self-supervised learning approaches to monocular scene flow, and yields competitive results for the optical flow and monocular depth estimation sub-tasks. Semi-supervised fine-tuning further improves the accuracy and yields promising results in real-time.

* To appear at CVPR 2020 (Oral); a typo corrected in the reference section 
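
The "classical optical flow cost volume" mentioned above can be illustrated as a dense correlation over a small window of candidate displacements, as in the sketch below. The feature dimensions and displacement range are arbitrary didactic choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def correlation_cost_volume(feat1, feat2, max_disp=3):
    """Dense correlation cost volume between two feature maps (didactic sketch).

    feat1, feat2: (B, C, H, W). Returns (B, (2*max_disp+1)**2, H, W), where
    each channel holds the correlation of feat1 with feat2 shifted by one
    candidate displacement.
    """
    B, C, H, W = feat1.shape
    pad = F.pad(feat2, [max_disp] * 4)          # pad left/right/top/bottom
    volumes = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + H, dx:dx + W]
            volumes.append((feat1 * shifted).sum(dim=1, keepdim=True) / C)
    return torch.cat(volumes, dim=1)

cv = correlation_cost_volume(torch.randn(1, 16, 12, 12), torch.randn(1, 16, 12, 12))
print(cv.shape)  # torch.Size([1, 49, 12, 12])
```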

Optical Flow Estimation in the Deep Learning Age

Apr 06, 2020
Junhwa Hur, Stefan Roth

Akin to many subareas of computer vision, the recent advances in deep learning have also significantly influenced the literature on optical flow. Previously, the literature had been dominated by classical energy-based models, which formulate optical flow estimation as an energy minimization problem. However, as the practical benefits of Convolutional Neural Networks (CNNs) over conventional methods have become apparent in numerous areas of computer vision and beyond, they have also seen increased adoption in the context of motion estimation, to the point where the current state of the art in terms of accuracy is set by CNN approaches. We first review this transition as well as the developments from early work to the current state of CNNs for optical flow estimation. Along the way, we discuss some of their technical details and compare them to recapitulate which technical contributions led to the most significant accuracy improvements. We then provide an overview of the various optical flow approaches introduced in the deep learning age, including those based on alternative learning paradigms (e.g., unsupervised and semi-supervised methods) as well as the extension to the multi-frame case, which can yield further accuracy improvements.

* To appear as a book chapter in Modelling Human Motion, N. Noceti, A. Sciutti and F. Rea, Eds., Springer, 2020 
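
For reference, the classical energy-minimization formulation the chapter starts from can be written in the standard Horn-Schunck form (a textbook formulation given here for orientation, not a quotation from the chapter): the first term penalizes violations of brightness constancy between the two frames, the second enforces smoothness of the flow field, and lambda balances the two.

```latex
E(\mathbf{u}) = \int_{\Omega} \bigl( I_2(\mathbf{x} + \mathbf{u}(\mathbf{x})) - I_1(\mathbf{x}) \bigr)^2 \, d\mathbf{x}
\;+\; \lambda \int_{\Omega} \lVert \nabla \mathbf{u}(\mathbf{x}) \rVert^2 \, d\mathbf{x}
```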

Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation

Apr 10, 2019
Junhwa Hur, Stefan Roth

Deep learning approaches to optical flow estimation have seen rapid progress over the recent years. One common trait of many networks is that they refine an initial flow estimate either through multiple stages or across the levels of a coarse-to-fine representation. While leading to more accurate results, the downside of this is an increased number of parameters. Taking inspiration from both classical energy minimization approaches as well as residual networks, we propose an iterative residual refinement (IRR) scheme based on weight sharing that can be combined with several backbone networks. It reduces the number of parameters, improves the accuracy, or even achieves both. Moreover, we show that integrating occlusion prediction and bi-directional flow estimation into our IRR scheme can further boost the accuracy. Our full network achieves state-of-the-art results for both optical flow and occlusion estimation across several standard datasets.

* To appear in CVPR 2019 
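
A minimal sketch of the weight-sharing idea behind IRR: one small decoder is applied at every refinement step to predict a residual that is added to the current flow estimate, so additional iterations add no additional parameters. Layer sizes and the omission of the occlusion and bi-directional branches are simplifications; this is not the paper's full network.

```python
import torch
import torch.nn as nn

class SharedRefiner(nn.Module):
    """One decoder reused at every refinement step: it sees the features and
    the current flow and predicts a residual added to the estimate. Sharing
    weights keeps the parameter count constant regardless of iterations."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim + 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, feat, steps=5):
        flow = feat.new_zeros(feat.shape[0], 2, *feat.shape[2:])
        for _ in range(steps):                  # same weights every step
            flow = flow + self.net(torch.cat([feat, flow], dim=1))
        return flow

flow = SharedRefiner()(torch.randn(1, 32, 16, 16))
print(flow.shape)  # torch.Size([1, 2, 16, 16])
```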