"facial": models, code, and papers
DeepFakes: Detecting Forged and Synthetic Media Content Using Machine Learning

Sep 07, 2021
Sm Zobaed, Md Fazle Rabby, Md Istiaq Hossain, Ekram Hossain, Sazib Hasan, Asif Karim, Khan Md. Hasib


EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions

Jan 25, 2020
Joy Egede, Temitayo Olugbade, Chongyang Wang, Siyang Song, Nadia Berthouze, Michel Valstar, Amanda Williams, Hongying Meng, Min Aung, Nicholas Lane


DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding

Aug 05, 2017
Dieu Linh Tran, Robert Walecki, Ognjen Rudovic, Stefanos Eleftheriadis, Bjørn Schuller, Maja Pantic


BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis

Mar 10, 2022
Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, Elif Bozkurt, Bo Zheng


High-fidelity GAN Inversion with Padding Space

Mar 21, 2022
Qingyan Bai, Yinghao Xu, Jiapeng Zhu, Weihao Xia, Yujiu Yang, Yujun Shen


Deep Temporal Appearance-Geometry Network for Facial Expression Recognition

Mar 05, 2015
Heechul Jung, Sihaeng Lee, Sunjeong Park, Injae Lee, Chunghyun Ahn, Junmo Kim


Metaethical Perspectives on 'Benchmarking' AI Ethics

Apr 11, 2022
Travis LaCroix, Alexandra Sasha Luccioni


Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks

Oct 09, 2021
Yashish M. Siriwardena, Chris Kitchen, Deanna L. Kelly, Carol Espy-Wilson


Facial Expressions Tracking and Recognition: Database Protocols for Systems Validation and Evaluation

Jun 02, 2015
Catarina Runa Miranda, Pedro Mendes, Pedro Coelho, Xenxo Alvarez, João Freitas, Miguel Sales Dias, Verónica Costa Orvalho
