
Junhwa Hur

Lumiere: A Space-Time Diffusion Model for Video Generation
Feb 05, 2024

Boundary Attention: Learning to Find Faint Boundaries at Any Resolution
Jan 01, 2024

Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model
Dec 20, 2023

WonderJourney: Going from Anywhere to Everywhere
Dec 06, 2023

Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence
Nov 28, 2023

The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation
Jun 02, 2023

A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
May 24, 2023

Self-supervised AutoFlow
Dec 08, 2022

RAFT-MSF: Self-Supervised Monocular Scene Flow using Recurrent Optimizer
May 03, 2022

Self-Supervised Multi-Frame Monocular Scene Flow
May 05, 2021