
Hyeongwoo Kim

HQ3DAvatar: High Quality Controllable 3D Head Avatar

Mar 25, 2023
Kartik Teotia, Mallikarjun B R, Xingang Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, Christian Theobalt

Multi-view volumetric rendering techniques have recently shown great potential in modeling and synthesizing high-quality head avatars. A common approach to capturing full head dynamic performances is to track the underlying geometry using a mesh-based template or 3D cube-based graphics primitives. While these model-based approaches achieve promising results, they often fail to learn complex geometric details such as the mouth interior, hair, and topological changes over time. This paper presents a novel approach to building highly photorealistic digital head avatars. Our method learns a canonical space via an implicit function parameterized by a neural network. It leverages multiresolution hash encoding in the learned feature space, allowing for high-quality, fast training and high-resolution rendering. At test time, our method is driven by a monocular RGB video. Here, an image encoder extracts face-specific features that also condition the learnable canonical space, encouraging deformation-dependent texture variations during training. We also propose a novel optical-flow-based loss that enforces correspondences in the learned canonical space, thus encouraging artifact-free and temporally consistent renderings. We show results on challenging facial expressions and demonstrate free-viewpoint renderings at interactive real-time rates for medium image resolutions. Our method outperforms all existing approaches, both visually and numerically. We will release our multiple-identity dataset to encourage further research. Our project page is available at: https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/

* 16 pages, 15 figures. Project page: https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/
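As an illustration of the optical-flow-based correspondence loss described above, here is a minimal PyTorch sketch. It assumes the network predicts per-pixel canonical coordinates for two adjacent frames and that 2D optical flow between them is available; the function and tensor names are hypothetical and not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def canonical_flow_loss(canon_t, canon_tp1, flow_t_to_tp1):
    # canon_t, canon_tp1: (B, 3, H, W) canonical-space coordinates predicted per pixel
    # flow_t_to_tp1:      (B, 2, H, W) optical flow from frame t to frame t+1, in pixels
    B, _, H, W = canon_t.shape
    # pixel grid of frame t, advected by the flow into frame t+1
    ys, xs = torch.meshgrid(torch.arange(H, device=canon_t.device),
                            torch.arange(W, device=canon_t.device), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float()                # (2, H, W), x/y order
    target = base.unsqueeze(0) + flow_t_to_tp1                 # (B, 2, H, W)
    # normalize to [-1, 1] as required by grid_sample
    gx = 2.0 * target[:, 0] / (W - 1) - 1.0
    gy = 2.0 * target[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)                       # (B, H, W, 2)
    # canonical coordinates of the flow-corresponding pixels in frame t+1
    canon_tp1_at_t = F.grid_sample(canon_tp1, grid, align_corners=True)
    # pixels related by the flow should map to the same canonical location
    return (canon_t - canon_tp1_at_t).abs().mean()
```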

VideoForensicsHQ: Detecting High-quality Manipulated Face Videos

May 20, 2020
Gereon Fox, Wentao Liu, Hyeongwoo Kim, Hans-Peter Seidel, Mohamed Elgharib, Christian Theobalt

New approaches to synthesizing and manipulating face videos at very high quality have paved the way for new applications in computer animation, virtual and augmented reality, and face video analysis. However, there are concerns that they may be used maliciously, e.g. to manipulate videos of public figures, politicians or reporters in order to spread false information. The research community has therefore developed techniques for the automated detection of modified imagery and assembled benchmark datasets showing manipulations by state-of-the-art techniques. In this paper, we contribute to this initiative in two ways: First, we present a new audio-visual benchmark dataset. It shows some of the highest-quality visual manipulations available today, and human observers find them significantly harder to identify as forged than videos from other benchmarks. Furthermore, we propose a new family of deep-learning-based fake detectors and demonstrate that existing detectors are not well suited for detecting fakes of the high quality presented in our dataset. Our detectors examine both spatial and temporal features, which allows them to outperform existing approaches both in detection accuracy and in generalization to unseen fake-generation methods and unseen identities.

* 21 pages, 9 figures 
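The detectors described above combine spatial and temporal features. Below is a hypothetical PyTorch sketch of such a spatio-temporal classifier, with per-frame CNN features aggregated by a recurrent unit; the architecture and names are assumptions, not the paper's detector.

```python
import torch
import torch.nn as nn
import torchvision

class SpatioTemporalFakeDetector(nn.Module):
    """Sketch: per-frame spatial features + a GRU over time + a real/fake head."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)   # spatial feature extractor
        backbone.fc = nn.Identity()                            # keep 512-d pooled features
        self.backbone = backbone
        self.temporal = nn.GRU(512, hidden, batch_first=True)  # aggregate over frames
        self.head = nn.Linear(hidden, 1)                       # real/fake logit per clip

    def forward(self, clip):                 # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(B, T, -1)
        _, h_n = self.temporal(feats)
        return self.head(h_n[-1])            # (B, 1)
```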

Neural Human Video Rendering: Joint Learning of Dynamic Textures and Rendering-to-Video Translation

Jan 14, 2020
Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

Synthesizing realistic videos of humans using neural networks has become a popular alternative to the conventional graphics-based rendering pipeline due to its high efficiency. Existing works typically formulate this as an image-to-image translation problem in 2D screen space, which leads to artifacts such as over-smoothing, missing body parts, and temporal instability of fine-scale detail, such as pose-dependent wrinkles in clothing. In this paper, we propose a novel human video synthesis method that addresses these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space. More specifically, our method relies on the combination of two convolutional neural networks (CNNs). Given the pose information, the first CNN predicts a dynamic texture map that contains time-coherent high-frequency details, and the second CNN conditions the generation of the final video on the temporally coherent output of the first CNN. We demonstrate several applications of our approach, such as human reenactment and novel-view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively and quantitatively.
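A minimal PyTorch sketch of the two-CNN pipeline described above. The tiny networks and the `render_with_texture` callable are placeholders for illustration only; the paper uses full encoder-decoder architectures.

```python
import torch
import torch.nn as nn

def small_cnn(cin, cout):
    # tiny stand-in for the full encoder-decoder networks used in practice
    return nn.Sequential(nn.Conv2d(cin, 64, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(64, cout, 3, padding=1), nn.Sigmoid())

class TwoStageHumanVideoSynthesis(nn.Module):
    def __init__(self, pose_channels=3):
        super().__init__()
        # stage 1: pose encoding -> dynamic texture map with time-coherent detail
        self.texture_net = small_cnn(pose_channels, 3)
        # stage 2: textured render of the template -> final screen-space video frame
        self.translation_net = small_cnn(3, 3)

    def forward(self, pose_map, render_with_texture):
        """pose_map: pose conditioning defined in texture space;
        render_with_texture: callable rendering the 3D template with a given
        texture map (assumed to exist in the surrounding pipeline)."""
        dynamic_texture = self.texture_net(pose_map)
        rendered = render_with_texture(dynamic_texture)
        return self.translation_net(rendered)
```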

Neural Style-Preserving Visual Dubbing

Sep 06, 2019
Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

Dubbing is a technique for translating video content from one language to another. However, state-of-the-art visual dubbing techniques directly copy facial expressions from source to target actors without considering identity-specific idiosyncrasies such as a unique type of smile. We present a style-preserving visual dubbing approach that works from single video inputs and maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match a foreign language. At the heart of our approach is the concept of motion style, in particular for facial expressions, i.e., the person-specific way expressions change, which is an essential factor beyond visual accuracy in face editing applications. Our method is based on a recurrent generative adversarial network that captures the spatiotemporal co-activation of facial expressions and enables generating and modifying the facial expressions of the target actor while preserving their style. We train our model on unsynchronized source and target videos in an unsupervised manner using cycle-consistency and mouth expression losses, and synthesize photorealistic video frames using a layered neural face renderer. Our approach generates temporally coherent results and handles dynamic backgrounds. Our results show that our dubbing approach maintains the idiosyncratic style of the target actor better than previous approaches, even for widely differing source and target actors.

* SIGGRAPH Asia 2019 
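To illustrate the unsupervised training signal, here is a hypothetical sketch of the cycle-consistency and mouth-expression losses over expression-parameter sequences. The generator interfaces and the `mouth_idx` selection are assumptions, not the paper's exact formulation.

```python
import torch

def dubbing_losses(G_s2t, G_t2s, expr_src, expr_tgt, mouth_idx):
    """G_s2t, G_t2s: mapping networks between source-style and target-style
    expression sequences of shape (B, T, D); mouth_idx: indices of
    mouth-related expression coefficients (assumed)."""
    fake_tgt = G_s2t(expr_src)
    fake_src = G_t2s(expr_tgt)
    # cycle consistency: mapping there and back should recover the input
    cyc = (G_t2s(fake_tgt) - expr_src).abs().mean() + \
          (G_s2t(fake_src) - expr_tgt).abs().mean()
    # mouth expression loss: keep the source's mouth motion so the dub stays lip-synced
    mouth = (fake_tgt[..., mouth_idx] - expr_src[..., mouth_idx]).abs().mean()
    return cyc, mouth
```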

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

May 26, 2019
Mohamed Elgharib, Mallikarjun BR, Ayush Tewari, Hyeongwoo Kim, Wentao Liu, Hans-Peter Seidel, Christian Theobalt

Face performance capture and reenactment techniques typically use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applicability in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters, and a videorealistic animation is produced through careful adversarial training of a rendering network conditioned on these parameters. Our problem is challenging, as the human visual system is sensitive to the smallest facial irregularities that could occur in the final results. This sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage in a supervised manner, without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetric expressions. It works under varying illumination, backgrounds and movements, handles people of different ethnicities, and can operate in real time.

* Project Page: http://gvv.mpi-inf.mpg.de/projects/EgoFace/ 
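A hypothetical sketch of the overall pipeline: an egocentric frame is projected into a low-dimensional expression-parameter space, which then conditions a rendering network. Both modules here are placeholders; in the paper the rendering is trained adversarially.

```python
import torch
import torch.nn as nn

class EgoToFrontalPipeline(nn.Module):
    """Sketch: egocentric frame -> expression parameters -> frontal frame."""
    def __init__(self, n_params=64):
        super().__init__()
        self.encoder = nn.Sequential(                          # regresses expression parameters
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_params))
        self.renderer = nn.Sequential(                         # stand-in for the adversarially
            nn.Conv2d(n_params, 64, 3, padding=1), nn.ReLU(),  # trained rendering network
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, ego_frame, out_hw=(256, 256)):
        params = self.encoder(ego_frame)                                   # (B, n_params)
        # broadcast the parameter vector to a spatial conditioning map
        cond = params[:, :, None, None].expand(-1, -1, *out_hw).contiguous()
        return self.renderer(cond)                                         # (B, 3, H, W)
```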

Neural Animation and Reenactment of Human Actor Videos

Sep 11, 2018
Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

We propose a method for generating (near) video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. Our approach thus significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. To train our networks, we first track the 3D motion of the person in the video using the template model and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method on the reenactment of another tracked person, whose motion data drives the synthesis, and we show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis. Project page: http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/

* Project page: http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/ 
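The rendering-to-video translation is trained adversarially. Below is a generic pix2pix-style training step shown as a sketch; the loss weighting and discriminator inputs are assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def cgan_step(G, D, synthetic, real, opt_g, opt_d, l1_weight=10.0):
    """One sketch training step: translate a synthetic render of the tracked
    3D template (`synthetic`) into a realistic frame of the actor (`real`)."""
    fake = G(synthetic)
    # --- discriminator: real pairs vs. generated pairs ---
    d_real = D(torch.cat([synthetic, real], dim=1))
    d_fake = D(torch.cat([synthetic, fake.detach()], dim=1))
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # --- generator: fool the discriminator and stay close to the real frame ---
    d_fake = D(torch.cat([synthetic, fake], dim=1))
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) + \
             l1_weight * (fake - real).abs().mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```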

Deep Video Portraits

May 29, 2018
Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.

* SIGGRAPH 2018, Video: https://www.youtube.com/watch?v=qc5P2bvfl44 
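The source-to-target transfer described above amounts to recombining reconstructed face-model parameters before rendering the synthetic conditioning input: identity-related parameters stay with the target, motion-related parameters come from the source. A hypothetical sketch with assumed parameter names:

```python
def recombine_parameters(source, target):
    """source, target: dicts of per-frame face-model parameters (names assumed).
    Identity, reflectance and illumination stay with the target; head pose,
    expression, gaze and blinking are taken from the source actor."""
    return {
        "identity":     target["identity"],
        "reflectance":  target["reflectance"],
        "illumination": target["illumination"],
        "rotation":     source["rotation"],
        "translation":  source["translation"],
        "expression":   source["expression"],
        "gaze":         source["gaze"],
        "blink":        source["blink"],
    }
```

The recombined parameters are then rendered with the parametric face model, and the trained network turns that synthetic rendering into the photo-realistic target video.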

InverseFaceNet: Deep Monocular Inverse Face Rendering

May 16, 2018
Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image. By estimating all parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible in real time. Most previous learning-based face reconstruction approaches do not jointly recover all dimensions, or are severely limited in terms of visual quality. In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained on a large, synthetically created training corpus. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. We further propose a self-supervised bootstrapping process in the network training loop, which iteratively updates the synthetic training corpus to better reflect the distribution of real-world imagery. We demonstrate that this strategy outperforms networks trained purely on synthetic data. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art approaches.

* CVPR 2018 (poster), 10 pages (+5 pages) 
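A loss defined directly in parameter space can be pictured with a short sketch; the parameter-block names and per-block weights below are assumptions for illustration, not the paper's exact loss.

```python
import torch

def model_space_loss(pred, gt, weights=None):
    """pred, gt: dicts of predicted / ground-truth face-model parameter blocks
    (assumed names), each of shape (B, D_block)."""
    blocks = ["pose", "shape", "expression", "reflectance", "illumination"]
    weights = weights or {b: 1.0 for b in blocks}
    loss = 0.0
    for b in blocks:
        # weighted squared error per semantic parameter block
        loss = loss + weights[b] * (pred[b] - gt[b]).pow(2).sum(dim=-1).mean()
    return loss
```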

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz

Mar 29, 2018
Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

The reconstruction of dense 3D models of face geometry and appearance from a single image is highly challenging and ill-posed. To constrain the problem, many approaches rely on strong priors, such as parametric face models learned from limited 3D scan data. However, such prior models restrict generalization to the true diversity of facial geometry, skin reflectance and illumination. To alleviate this problem, we present the first approach that jointly learns 1) a regressor for face shape, expression, reflectance and illumination on the basis of 2) a concurrently learned parametric face model. Our multi-level face model combines the advantage of 3D Morphable Models for regularization with the out-of-space generalization of a learned corrective space. We train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss, both defined at multiple detail levels. Our approach compares favorably to the state of the art in terms of reconstruction quality, generalizes better to real-world faces, and runs at over 250 Hz.

* CVPR 2018 (Oral). Project webpage: https://gvv.mpi-inf.mpg.de/projects/FML/ 
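The multi-level model can be pictured as a linear 3DMM base plus a learned corrective space on top. A hypothetical, unbatched PyTorch sketch (shapes and module names assumed):

```python
import torch
import torch.nn as nn

class MultiLevelFaceGeometry(nn.Module):
    """Sketch of a two-level face model: a linear 3DMM base plus a learned
    per-vertex corrective space for out-of-model detail (single sample)."""
    def __init__(self, mean_shape, shape_basis, expr_basis, code_dim):
        super().__init__()
        self.register_buffer("mean", mean_shape)   # (V*3,)
        self.register_buffer("S", shape_basis)     # (V*3, Ks)
        self.register_buffer("E", expr_basis)      # (V*3, Ke)
        # learned linear corrective space on top of the 3DMM
        self.corrective = nn.Linear(code_dim, mean_shape.numel(), bias=False)

    def forward(self, alpha, delta, corr_code):
        base = self.mean + self.S @ alpha + self.E @ delta   # coarse 3DMM geometry
        return base + self.corrective(corr_code)             # add learned fine-scale correction
```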

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

Dec 07, 2017
Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt

In this work, we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as the decoder. The core innovation is our new differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance and scene illumination. Due to this new way of combining CNN-based and model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real-world data feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation.

* International Conference on Computer Vision (ICCV) 2017 (Oral), 13 pages 
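A sketch of the encoder half of such a model-based autoencoder: a CNN regresses a single code vector whose slices have fixed semantic meaning and would be handed to the differentiable parametric decoder. The dimensions and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# assumed dimensions for the semantic code vector (illustrative only)
DIMS = {"pose": 6, "shape": 80, "expression": 64, "reflectance": 80, "illumination": 27}

class SemanticCodeEncoder(nn.Module):
    """Sketch: CNN encoder regressing one code vector with fixed semantic slices."""
    def __init__(self, feature_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU())
        self.head = nn.Linear(feature_dim, sum(DIMS.values()))

    def forward(self, image):                     # image: (B, 3, H, W)
        code = self.head(self.backbone(image))
        out, i = {}, 0
        for name, d in DIMS.items():              # split into semantically defined blocks
            out[name] = code[:, i:i + d]
            i += d
        return out                                # passed to the differentiable decoder
```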