"facial": models, code, and papers
Refacing: reconstructing anonymized facial features using GANs

Oct 15, 2018
David Abramian, Anders Eklund

4 figures

Identity Preserving Loss for Learned Image Compression

Apr 27, 2022
Jiuhong Xiao, Lavisha Aggarwal, Prithviraj Banerjee, Manoj Aggarwal, Gerard Medioni

4 figures

Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation

Sep 15, 2017
Matan Sela, Elad Richardson, Ron Kimmel

4 figures

Audio-Visual Evaluation of Oratory Skills

Sep 30, 2021
Tzvi Michelson, Shmuel Peleg

2 figures

An Exploration of Active Learning for Affective Digital Phenotyping

Apr 06, 2022
Peter Washington, Cezmi Mutlu, Aaron Kline, Cathy Hou, Kaitlyn Dunlap, Jack Kent, Arman Husic, Nate Stockham, Brianna Chrisman, Kelley Paskov, Jae-Yoon Jung, Dennis P. Wall

4 figures

ResMoNet: A Residual Mobile-based Network for Facial Emotion Recognition in Resource-Limited Systems

May 15, 2020
Rodolfo Ferro-Pérez, Hugo Mitre-Hernandez

4 figures

Facial Landmark Detection for Manga Images

Nov 08, 2018
Marco Stricker, Olivier Augereau, Koichi Kise, Motoi Iwata

4 figures

Human Head Pose Estimation by Facial Features Location

Oct 09, 2015
Eugene Borovikov


Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition

Nov 22, 2017
Shizhong Han, Zibo Meng, Zhiyuan Li, James O'Reilly, Jie Cai, Xiaofeng Wang, Yan Tong

4 figures

Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation

May 07, 2021
Lincheng Li, Suzhen Wang, Zhimeng Zhang, Yu Ding, Yixing Zheng, Xin Yu, Changjie Fan

4 figures