"facial recognition": models, code, and papers

AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild

Oct 09, 2017
Ali Mollahosseini, Behzad Hasani, Mohammad H. Mahoor

Automated affective computing in the wild is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (i.e., the categorical model). Annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal) are very limited. To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild, called AffectNet. AffectNet contains more than 1,000,000 facial images collected from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and for the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research on automated facial expression recognition in two emotion models. Two baseline deep neural networks are used to classify images in the categorical model and to predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines outperform conventional machine learning methods and off-the-shelf facial expression recognition systems.

* IEEE Transactions on Affective Computing, 2017 
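
The two baselines described above combine a categorical expression classifier with a continuous valence/arousal predictor. A minimal sketch of such a two-head setup, assuming a small torchvision backbone and standard losses rather than the authors' exact models:

import torch
import torch.nn as nn
from torchvision import models

class AffectBaseline(nn.Module):
    """Illustrative two-head model: expression logits plus valence/arousal regression."""
    def __init__(self, num_expressions=7):           # seven discrete expressions, per the abstract
        super().__init__()
        backbone = models.resnet18(weights=None)      # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                   # share pooled features between both heads
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_expressions)   # categorical model
        self.va_head = nn.Linear(feat_dim, 2)                   # valence and arousal in [-1, 1]

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), torch.tanh(self.va_head(feats))

model = AffectBaseline()
images = torch.randn(4, 3, 224, 224)                  # dummy batch of face crops
logits, va = model(images)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 7, (4,))) \
     + nn.MSELoss()(va, torch.empty(4, 2).uniform_(-1, 1))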
  

Do Deep Neural Networks Forget Facial Action Units? -- Exploring the Effects of Transfer Learning in Health Related Facial Expression Recognition

Apr 15, 2021
Pooja Prajod, Dominik Schiller, Tobias Huber, Elisabeth André

In this paper, we present a process to investigate the effects of transfer learning for automatic facial expression recognition from emotions to pain. To this end, we first train a VGG16 convolutional neural network to automatically discern between eight categorical emotions. We then fine-tune successively larger parts of this network to learn suitable representations for the task of automatic pain recognition. Subsequently, we apply those fine-tuned representations back to the original task of emotion recognition to further investigate the differences in performance between the models. In the second step, we use Layer-wise Relevance Propagation to analyze samples that the model previously classified correctly but now misclassifies. Based on this analysis, we rely on the visual inspection of a human observer to generate hypotheses about what the model has forgotten. Finally, we test those hypotheses quantitatively using concept embedding analysis methods. Our results show that the network fully fine-tuned for pain recognition indeed paid less attention to two action units that are relevant for expression recognition but not for pain recognition.

* The 5th International Workshop on Health Intelligence (W3PHIAI-21) 
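
The fine-tuning procedure above can be sketched as freezing the lower VGG16 blocks and training the upper layers together with a new pain head. The unfreeze index, head size, and weights below are assumptions for illustration, not the paper's configuration:

import torch
import torch.nn as nn
from torchvision import models

def build_pain_model(unfreeze_from=24):
    """Start from a VGG16 emotion backbone and fine-tune only its upper layers for pain."""
    vgg = models.vgg16(weights=None)                  # in practice, load the emotion-trained weights here
    for i, layer in enumerate(vgg.features):
        for p in layer.parameters():
            p.requires_grad = i >= unfreeze_from      # freeze early conv blocks, tune later ones
    vgg.classifier[6] = nn.Linear(4096, 2)            # new head: pain vs. no pain (assumed binary task)
    return vgg

model = build_pain_model(unfreeze_from=24)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
scores = model(torch.randn(2, 3, 224, 224))           # dummy forward pass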
  

RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion Judgement and Objective AU Annotations

Aug 12, 2020
Wenjing Yan, Shan Li, Chengtao Que, JiQuan Pei, Weihong Deng

Much of the work on automatic facial expression recognition relies on databases containing a fixed number of emotion classes and their exaggerated facial configurations (generally the six prototypical facial expressions), based on Ekman's Basic Emotion Theory. However, recent studies have revealed that facial expressions in everyday life can blend multiple basic emotions, and the emotion labels for such in-the-wild facial expressions cannot easily be assigned solely from pre-defined AU patterns. How to analyze the action units for these complex expressions is still an open question. To address this issue, we develop the RAF-AU database, which employs a sign-based (i.e., AUs) and judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild. We first reviewed the annotation methods in existing databases and identified crowdsourcing as a promising strategy for labeling in-the-wild facial expressions. RAF-AU was then finely annotated by experienced coders, and we conducted a preliminary investigation of which key AUs contribute most to a perceived emotion and of the relationship between AUs and facial expressions. Finally, we provide a baseline for AU recognition in RAF-AU using popular features and multi-label learning methods.
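
As a rough illustration of the multi-label AU baseline mentioned at the end of the abstract, each AU can be treated as an independent binary label, scored with a per-AU sigmoid and a binary cross-entropy loss. The feature dimension and AU count below are placeholders, not the RAF-AU baseline itself:

import torch
import torch.nn as nn

NUM_AUS = 26                                   # assumed AU count; see the RAF-AU paper for the actual set

class AUClassifier(nn.Module):
    """Multi-label head: one score per action unit."""
    def __init__(self, feat_dim=512, num_aus=NUM_AUS):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_aus)

    def forward(self, feats):
        return self.head(feats)                # raw logits; apply sigmoid at inference time

features = torch.randn(8, 512)                 # stand-in for precomputed face features
targets = torch.randint(0, 2, (8, NUM_AUS)).float()   # multi-hot AU labels
logits = AUClassifier()(features)
loss = nn.BCEWithLogitsLoss()(logits, targets)        # independent binary loss per AU
present = torch.sigmoid(logits) > 0.5                 # predicted AU occurrences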

  

FoggySight: A Scheme for Facial Lookup Privacy

Dec 15, 2020
Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno

Advances in deep learning algorithms have enabled better-than-human performance on face recognition tasks. In parallel, private companies have been scraping social media and other public websites that tie photos to identities and have built up large databases of labeled face images. Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users. In this work, we tackle the problem of providing privacy from such face recognition systems. We propose and evaluate FoggySight, a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media. FoggySight's core feature is a community protection strategy where users acting as protectors of privacy for others upload decoy photos generated by adversarial machine learning algorithms. We explore different settings for this scheme and find that it does enable protection of facial privacy -- including against a facial recognition service with unknown internals.
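
The decoy strategy described above can be approximated with an adversarial-examples-style optimization: perturb a protector's photo so that its embedding moves toward another user's embedding while the perturbation stays small. This is a simplified sketch with a placeholder embedding network, not FoggySight's actual algorithm or parameters:

import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))   # placeholder face embedder

def make_decoy(photo, target_embedding, steps=50, lr=0.01, budget=0.03):
    """Nudge `photo` so its embedding approaches `target_embedding`, within an L-inf budget."""
    delta = torch.zeros_like(photo, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = embedder(photo + delta)
        loss = 1 - nn.functional.cosine_similarity(emb, target_embedding).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)      # keep the perturbation visually small
    return (photo + delta).detach().clamp(0, 1)

photo = torch.rand(1, 3, 112, 112)                        # protector's photo (dummy data)
target = embedder(torch.rand(1, 3, 112, 112)).detach()    # embedding of the user being protected
decoy = make_decoy(photo, target)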

  

FePh: An Annotated Facial Expression Dataset for the RWTH-PHOENIX-Weather 2014 Dataset

Mar 03, 2020
Marie Alaghband, Niloofar Yousefi, Ivan Garibay

Facial expressions are an important part of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce. In this manuscript, we introduce a continuous sign language facial expression dataset, comprising over 3,000 annotated images of the RWTH-PHOENIX-Weather 2014 development set. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images the subjects are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also include a "None" class for images whose facial expression cannot be described by any of the aforementioned emotions. Although we provide FePh in the context of facial expression and sign language, it has wider applications in gesture recognition and Human Computer Interaction (HCI) systems. The dataset will be publicly available.

  

3D Facial Action Units Recognition for Emotional Expression

Dec 01, 2017
N. Hussain, H. Ujir, I. Hipiny, J-L Minoi

Muscular activity causes the activation of certain AUs for each facial expression over a certain duration of the expression. This paper presents methods to recognise facial Action Units (AUs) using distances between the facial features associated with the activated muscles. The seven facial action units involved are AU1, AU4, AU6, AU12, AU15, AU17, and AU25, which characterise happy and sad expressions. Recognition is performed for each AU according to rules defined on the distances between facial points. The chosen facial distances are extracted from twelve facial features. The facial distances are then used to train a Support Vector Machine (SVM) and a Neural Network (NN). Classification results using the SVM are presented for several different kernels, while the NN results are presented for the training, validation, and testing phases.

* To be published in Advanced Science Letters Volume 24 (ICCSE2017) 
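
A minimal sketch of the distance-based pipeline described above, assuming landmark coordinates are already available: pairwise distances between selected facial points form a feature vector that is fed to an SVM. The point pairs, data, and labels below are placeholders, not the paper's rules or features:

import numpy as np
from sklearn.svm import SVC

# Hypothetical landmark index pairs whose distances might reflect AU activation
# (e.g., brow-to-eye spacing for AU1/AU4, mouth-corner spread for AU12). Illustrative only.
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def distance_features(landmarks):
    """landmarks: (n_points, 2) array of facial point coordinates."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in PAIRS])

rng = np.random.default_rng(0)
X = np.stack([distance_features(rng.random((12, 2))) for _ in range(100)])   # twelve facial features
y = rng.integers(0, 2, size=100)               # toy labels: AU active / inactive

clf = SVC(kernel="rbf")                        # the paper compares several SVM kernels
clf.fit(X, y)
print(clf.predict(X[:5]))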
  

Detection and Localization of Facial Expression Manipulations

Mar 15, 2021
Ghazal Mazaheri, Amit K. Roy-Chowdhury

Concern regarding the widespread use of fraudulent images and videos in social media necessitates precise detection of such fraud. The importance of facial expressions in communication is widely known, and adversarial attacks often focus on manipulating expression-related features. Thus, it is important to develop methods that can detect manipulations in facial expressions and localize the manipulated regions. To address this problem, we propose a framework that detects manipulations in facial expressions using a close combination of facial expression recognition and image manipulation methods. With the addition of feature maps extracted from the facial expression recognition framework, our manipulation detector is able to localize the manipulated region. We show that, on the Face2Face dataset, where there is abundant expression manipulation, our method achieves over 3% higher accuracy for both classification and localization of manipulations compared to state-of-the-art methods. In addition, results on the NeuralTextures dataset, where the facial expressions corresponding to the mouth regions have been modified, show 2% higher accuracy in both classification and localization of manipulation. We demonstrate that the method performs on par with state-of-the-art methods in cases where the expression is not manipulated but the identity is changed, thus ensuring the generalizability of the approach.
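
To illustrate the kind of fusion described above, the toy model below concatenates feature maps from an image branch and a stand-in expression branch before predicting a per-pixel manipulation map. Both branches and all shapes are assumptions, not the proposed framework:

import torch
import torch.nn as nn

class ManipulationDetector(nn.Module):
    """Toy fusion: combine image features with expression-style feature maps, then classify and localize."""
    def __init__(self):
        super().__init__()
        self.img_branch = nn.Conv2d(3, 16, 3, padding=1)    # placeholder image encoder
        self.expr_branch = nn.Conv2d(3, 16, 3, padding=1)   # stand-in for expression-network feature maps
        self.head = nn.Conv2d(32, 1, 1)                     # per-pixel manipulation score (localization)

    def forward(self, image):
        fused = torch.cat([self.img_branch(image), self.expr_branch(image)], dim=1)
        mask = torch.sigmoid(self.head(fused))              # localization map
        score = mask.mean(dim=(1, 2, 3))                    # crude global real/fake score
        return score, mask

detector = ManipulationDetector()
score, mask = detector(torch.rand(2, 3, 128, 128))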

  

Multi-spectral Facial Landmark Detection

Jun 09, 2020
Jin Keong, Xingbo Dong, Zhe Jin, Khawla Mallat, Jean-Luc Dugelay

Thermal face image analysis is favorable in certain circumstances, for example illumination-sensitive applications such as nighttime surveillance, and access control with privacy-preserving requirements. However, thermal face image analysis remains understudied relative to industry requirements. Detecting facial landmark points is important for many face analysis tasks, such as face recognition, 3D face reconstruction, and facial expression recognition. In this paper, we propose a robust neural-network-based facial landmark detector, namely Deep Multi-Spectral Learning (DMSL). Briefly, DMSL consists of two sub-models: face boundary detection and landmark coordinate detection. This architecture can detect facial landmarks in both visible and thermal images. In particular, the proposed DMSL model is robust in facial landmark detection when the face is partially occluded or facing different directions. Experiments conducted on Eurecom's visible-thermal paired database show the superior performance of DMSL over the state of the art for thermal facial landmark detection. In addition, we have annotated a thermal face dataset with the respective facial landmarks for the purpose of experimentation.
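
DMSL is described as two sub-models, a face boundary detector followed by a landmark coordinate regressor, shared across visible and thermal inputs. The sketch below only mirrors that two-stage structure with placeholder layers; it is not the DMSL architecture:

import torch
import torch.nn as nn

class TwoStageLandmarks(nn.Module):
    """Stage 1 predicts a face bounding box; stage 2 regresses landmark coordinates."""
    def __init__(self, num_landmarks=68):                  # 68 is a common landmark count (assumption)
        super().__init__()
        self.num_landmarks = num_landmarks
        self.boundary = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 4))                 # (x, y, w, h)
        self.coords = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, num_landmarks * 2))

    def forward(self, image):                              # single-channel input covers thermal or grayscale visible images
        box = self.boundary(image)
        pts = self.coords(image).view(-1, self.num_landmarks, 2)
        return box, pts

model = TwoStageLandmarks()
box, pts = model(torch.rand(2, 1, 64, 64))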

  

Implementation of Robust Face Recognition System Using Live Video Feed Based on CNN

Nov 18, 2018
Yang Li, Sangwhan Cha

Accurately and effectively identifying people has always been an interesting topic in research and industry. With the rapid development of artificial intelligence in recent years, facial recognition has gained considerable attention and has prompted the development of new identification methods. Compared to traditional card recognition, fingerprint recognition, and iris recognition, face recognition has many advantages, including a non-contact interface, high concurrency, and user-friendly usage. It has high potential to be used in government, public facilities, security, e-commerce, retailing, education, and many other fields. With the development of deep learning and the introduction of deep convolutional neural networks, the accuracy and speed of face recognition have made great strides. However, results differ considerably across networks, models, and system architectures. Furthermore, a face recognition system with a video feed can require a significant amount of data storage space and data processing time if it stores images and features of human faces. In this paper, facial features are extracted by merging and comparing multiple models, and a deep neural network is then trained on the combined features. In this way, the advantages of multiple models can be combined to improve recognition accuracy. After obtaining a model with high accuracy, we build a product model that takes a human face image and extracts it into a vector. The distances between vectors are then compared to determine whether two faces in different pictures belong to the same person. With our proposed system architecture, this approach reduces the data storage space and data processing time required for a face recognition system with a video feed.
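
The verification step described above (extract a face into a vector, then compare vector distances) can be sketched as follows; the embedding network and the threshold are placeholders, not the trained model from the paper:

import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))   # stands in for the trained CNN

def same_person(face_a, face_b, threshold=0.6):
    """Embed both faces and compare cosine distance against a verification threshold."""
    with torch.no_grad():
        ea = nn.functional.normalize(embedder(face_a), dim=1)
        eb = nn.functional.normalize(embedder(face_b), dim=1)
    distance = 1 - (ea * eb).sum(dim=1)        # cosine distance on unit-normalized embeddings
    return distance < threshold, distance

match, dist = same_person(torch.rand(1, 3, 112, 112), torch.rand(1, 3, 112, 112))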

  

An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis

Mar 29, 2022
Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai, Lijun Yin

Emotion is an experience associated with a particular pattern of physiological activity along with different physiological, behavioral, and cognitive changes. One behavioral change is facial expression, which has been studied extensively over the past few decades. Facial behavior varies with a person's emotion according to differences in culture, personality, age, context, and environment. In recent years, physiological signals have been used to study emotional responses. A typical signal is the electroencephalogram (EEG), which measures brain activity. Most existing EEG-based emotion analysis has overlooked the role of facial expression changes. There is little research on the relationship between facial behavior and brain signals due to the lack of datasets measuring both EEG and facial action signals simultaneously. To address this problem, we propose to develop a new database by collecting facial expressions, action units, and EEGs simultaneously. We recorded the EEGs and face videos of both posed facial actions and spontaneous expressions from 29 participants of different ages, genders, and ethnic backgrounds. Differing from existing approaches, we designed a protocol to capture the EEG signals by explicitly evoking participants' individual action units. We also investigated the relation between the EEG signals and facial action units. As a baseline, the database has been evaluated through experiments on both posed and spontaneous emotion recognition with images alone, EEG alone, and EEG fused with images. The database will be released to the research community to advance the state of the art in automatic emotion recognition.

* FG2021 (Long Oral) 
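
The image-plus-EEG fusion baseline mentioned above can be illustrated with simple feature-level concatenation. The encoders, EEG channel and sample counts, and class count below are assumptions for illustration only:

import torch
import torch.nn as nn

class FusionEmotionNet(nn.Module):
    """Concatenate image and EEG features before a shared emotion classifier."""
    def __init__(self, num_classes=7):                     # class count is an assumption
        super().__init__()
        self.image_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))
        self.eeg_enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 64))   # 32 channels x 256 samples
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, image, eeg):
        fused = torch.cat([self.image_enc(image), self.eeg_enc(eeg)], dim=1)
        return self.classifier(fused)

model = FusionEmotionNet()
logits = model(torch.rand(4, 3, 64, 64), torch.rand(4, 32, 256))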
  