Hyeongwoo Kim

HQ3DAvatar: High Quality Controllable 3D Head Avatar
Mar 25, 2023
Kartik Teotia, Mallikarjun B R, Xingang Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, Christian Theobalt

VideoForensicsHQ: Detecting High-quality Manipulated Face Videos
May 20, 2020
Gereon Fox, Wentao Liu, Hyeongwoo Kim, Hans-Peter Seidel, Mohamed Elgharib, Christian Theobalt

Neural Human Video Rendering: Joint Learning of Dynamic Textures and Rendering-to-Video Translation
Jan 14, 2020
Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

Neural Style-Preserving Visual Dubbing
Sep 06, 2019
Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, Christian Theobalt

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment
May 26, 2019
Mohamed Elgharib, Mallikarjun BR, Ayush Tewari, Hyeongwoo Kim, Wentao Liu, Hans-Peter Seidel, Christian Theobalt

Neural Animation and Reenactment of Human Actor Videos
Sep 11, 2018
Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

Deep Video Portraits
May 29, 2018
Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

InverseFaceNet: Deep Monocular Inverse Face Rendering
May 16, 2018
Hyeongwoo Kim, Michael Zollhöfer, Ayush Tewari, Justus Thies, Christian Richardt, Christian Theobalt

Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
Mar 29, 2018
Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction
Dec 07, 2017
Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, Christian Theobalt