"facial recognition": models, code, and papers

Backdooring Convolutional Neural Networks via Targeted Weight Perturbations

Dec 07, 2018
Jacob Dumford, Walter Scheirer

We present a new type of backdoor attack that exploits a vulnerability of convolutional neural networks (CNNs) that has been previously unstudied. In particular, we examine the application of facial recognition. Deep learning techniques are at the top of the game for facial recognition, which means they have now been implemented in many production-level systems. Alarmingly, unlike other commercial technologies such as operating systems and network devices, deep learning-based facial recognition algorithms are not presently designed with security requirements or audited for security vulnerabilities before deployment. Given how young the technology is and how abstract many of the internal workings of these algorithms are, neural network-based facial recognition systems are prime targets for security breaches. As more and more of our personal information begins to be guarded by facial recognition (e.g., the iPhone X), exploring the security vulnerabilities of these systems from a penetration testing standpoint is crucial. Along these lines, we describe a general methodology for backdooring CNNs via targeted weight perturbations. Using a five-layer CNN and ResNet-50 as case studies, we show that an attacker is able to significantly increase the chance that inputs they supply will be falsely accepted by a CNN while simultaneously preserving the error rates for legitimate enrolled classes.
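
At a high level, such an attack can be pictured as a search over small perturbations of one layer's weights, keeping only changes that help the attacker's input while leaving accuracy on enrolled classes intact. Below is a minimal, hedged sketch of that idea in PyTorch; the model, layer, attacker image, target embedding, validation loader, and accuracy helper are all hypothetical placeholders rather than the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def backdoor_by_weight_perturbation(model, layer, attacker_img, target_emb,
                                    val_loader, accuracy, eps=1e-3,
                                    tol=0.01, steps=200):
    """Greedy random search over small weight perturbations (hedged sketch).

    A perturbation of one layer's weights is kept only if it moves the
    attacker's embedding closer to the target identity while accuracy on
    legitimate enrolled data stays within `tol` of the baseline.
    """
    model.eval()
    with torch.no_grad():
        base_acc = accuracy(model, val_loader)
        best_sim = F.cosine_similarity(model(attacker_img), target_emb, dim=-1).item()

    for _ in range(steps):
        saved = layer.weight.data.clone()
        layer.weight.data += eps * torch.randn_like(layer.weight)

        with torch.no_grad():
            sim = F.cosine_similarity(model(attacker_img), target_emb, dim=-1).item()
            acc = accuracy(model, val_loader)

        if sim > best_sim and acc >= base_acc - tol:
            best_sim = sim                     # keep the perturbation
        else:
            layer.weight.data = saved          # roll it back
    return model
```

The acceptance test in the loop (closer to the target identity, negligible drop on legitimate data) is the constraint the abstract describes; the paper's actual perturbation strategy may differ.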

  

Photorealistic Facial Expression Synthesis by the Conditional Difference Adversarial Autoencoder

Aug 30, 2017
Yuqian Zhou, Bertram Emil Shi

Photorealistic facial expression synthesis from a single face image can be widely applied to face recognition, data augmentation for emotion recognition, or entertainment. This problem is challenging, in part due to a paucity of labeled facial expression data, making it difficult for algorithms to disambiguate changes due to identity and changes due to expression. In this paper, we propose the conditional difference adversarial autoencoder (CDAAE) for facial expression synthesis. The CDAAE takes a facial image of a previously unseen person and generates an image of that person's face with a target emotion or facial action unit label. The CDAAE adds a feedforward path to an autoencoder structure, connecting low-level features at the encoder to features at the corresponding level at the decoder. It handles the problem of disambiguating changes due to identity and changes due to facial expression by learning to generate the difference between low-level features of images of the same person but with different facial expressions. The CDAAE structure can be used to generate novel expressions by combining and interpolating between facial expressions/action units within the training set. Our experimental results demonstrate that the CDAAE can preserve identity information when generating facial expressions for unseen subjects more faithfully than previous approaches. This is especially advantageous when training with small databases.

* Accepted by ACII2017 
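
The architectural idea in the abstract above, an autoencoder conditioned on the target expression label with a feedforward path carrying low-level encoder features to the decoder so that only the expression difference has to be generated, could be sketched roughly as follows. This is a hedged PyTorch sketch assuming 64x64 inputs and one-hot label conditioning; layer sizes and the fusion scheme are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDAAESketch(nn.Module):
    """Rough sketch of a conditional difference autoencoder (adversarial part omitted)."""

    def __init__(self, n_labels=8, z_dim=128):
        super().__init__()
        self.enc_low = nn.Sequential(                    # low-level features, also fed forward
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU())
        self.enc_high = nn.Sequential(
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, z_dim))
        self.dec_fc = nn.Linear(z_dim + n_labels, 64 * 16 * 16)
        self.dec = nn.Sequential(                        # fuses high-level code with the skip path
            nn.ConvTranspose2d(64 + 32, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, x, target_label):                  # x: (B, 3, 64, 64), target_label: (B, n_labels)
        low = self.enc_low(x)                            # (B, 32, 32, 32)
        z = self.enc_high(low)                           # (B, z_dim)
        h = self.dec_fc(torch.cat([z, target_label], dim=1)).view(-1, 64, 16, 16)
        h = F.interpolate(h, size=low.shape[-2:])        # match the skip features spatially
        diff = self.dec(torch.cat([h, low], dim=1))      # generate the *difference* image
        return x + diff
```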
  

AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild

Oct 09, 2017
Ali Mollahosseini, Behzad Hasani, Mohammad H. Mahoor

Automated affective computing in the wild is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (aka the categorical model). There are very limited annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal). To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild, called AffectNet. AffectNet contains more than 1,000,000 facial images collected from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research in automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.

* IEEE Transactions on Affective Computing, 2017 
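
A two-headed baseline of the kind the abstract mentions, one head classifying the discrete emotion categories and one regressing valence and arousal, might look like the following hedged PyTorch sketch. The ResNet-18 backbone and the loss weighting are assumptions for illustration, not the paper's actual baselines.

```python
import torch
import torch.nn as nn
from torchvision import models

class AffectBaseline(nn.Module):
    """Shared backbone with a categorical head and a valence/arousal head (sketch)."""

    def __init__(self, n_emotions=7):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                        # reuse features for both heads
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, n_emotions)    # discrete expressions
        self.va_head = nn.Linear(feat_dim, 2)              # valence, arousal in [-1, 1]

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), torch.tanh(self.va_head(f))

def loss_fn(logits, va_pred, emotion, va_true, alpha=1.0):
    # cross-entropy for the categorical model, MSE for the dimensional model
    return nn.functional.cross_entropy(logits, emotion) + \
           alpha * nn.functional.mse_loss(va_pred, va_true)
```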
  

Do Deep Neural Networks Forget Facial Action Units? -- Exploring the Effects of Transfer Learning in Health Related Facial Expression Recognition

Apr 15, 2021
Pooja Prajod, Dominik Schiller, Tobias Huber, Elisabeth André

In this paper, we present a process to investigate the effects of transfer learning for automatic facial expression recognition from emotions to pain. To this end, we first train a VGG16 convolutional neural network to automatically discern between eight categorical emotions. We then fine-tune successively larger parts of this network to learn suitable representations for the task of automatic pain recognition. Subsequently, we apply those fine-tuned representations again to the original task of emotion recognition to further investigate the differences in performance between the models. In the second step, we use Layer-wise Relevance Propagation to analyze predictions that were previously classified correctly but are now misclassified. Based on this analysis, we rely on the visual inspection of a human observer to generate hypotheses about what has been forgotten by the model. Finally, we test those hypotheses quantitatively using concept embedding analysis methods. Our results show that the network which was fully fine-tuned for pain recognition indeed paid less attention to two action units that are relevant for expression recognition but not for pain recognition.

* The 5th International Workshop on Health Intelligence (W3PHIAI-21) 
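
The fine-tuning schedule described above, starting from the emotion-trained VGG16 and unfreezing successively larger top portions of the network for the pain task, could be sketched as below. This is a hedged PyTorch/torchvision sketch; the block boundaries, the optional checkpoint argument, and the two-class pain head are illustrative assumptions, not the authors' exact procedure.

```python
import torch.nn as nn
from torchvision import models

def make_pain_model(emotion_state_dict=None, n_unfrozen_blocks=1):
    """VGG16 fine-tuning sketch: train only the last `n_unfrozen_blocks` conv blocks."""
    model = models.vgg16(weights=None)
    if emotion_state_dict is not None:                    # start from the emotion-trained weights
        model.load_state_dict(emotion_state_dict, strict=False)

    # indices in model.features where each of the five VGG16 conv blocks starts
    block_starts = [0, 5, 10, 17, 24]
    cutoff = block_starts[-n_unfrozen_blocks] if n_unfrozen_blocks > 0 else len(model.features)

    for i, layer in enumerate(model.features):
        for p in layer.parameters():
            p.requires_grad = i >= cutoff                 # freeze everything below the cutoff

    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)  # pain vs. no pain
    return model
```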
  

Eight Years of Face Recognition Research: Reproducibility, Achievements and Open Issues

Aug 08, 2022
Tiago de Freitas Pereira, Dominic Schimdli, Yu Linghu, Xinyi Zhang, Sébastien Marcel, Manuel Günther

Automatic face recognition is a research area with high popularity. Many different face recognition algorithms have been proposed in the last thirty years of intensive research in the field. With the popularity of deep learning and its capability to solve a huge variety of different problems, face recognition researchers have concentrated their efforts on creating better models under this paradigm. Since 2015, state-of-the-art face recognition has been rooted in deep learning models. Despite the availability of large-scale and diverse datasets for evaluating the performance of face recognition algorithms, many of the modern datasets simply combine multiple factors that influence face recognition, such as face pose, occlusion, illumination, facial expression and image quality. When algorithms produce errors on these datasets, it is not clear which of the factors has caused the error and, hence, there is no guidance as to which direction more research is required. This work is a follow-up to our previous works, developed in 2014 and eventually published in 2016, showing the impact of various facial aspects on face recognition algorithms. By comparing the current state of the art with the best systems from the past, we demonstrate that faces under strong occlusions, some types of illumination, and strong expressions are problems mastered by deep learning algorithms, whereas recognition with low-resolution images, extreme pose variations, and open-set recognition are still open problems. To show this, we run a sequence of experiments using six different datasets and five different face recognition algorithms in an open-source and reproducible manner. We provide the source code to run all of our experiments, which is easily extensible so that utilizing your own deep network in our evaluation is just a few minutes away.

  

RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion Judgement and Objective AU Annotations

Aug 12, 2020
Wenjing Yan, Shan Li, Chengtao Que, JiQuan Pei, Weihong Deng

Much of the work on automatic facial expression recognition relies on databases containing a certain number of emotion classes and their exaggerated facial configurations (generally six prototypical facial expressions), based on Ekman's Basic Emotion Theory. However, recent studies have revealed that facial expressions in everyday life can blend multiple basic emotions, and the emotion labels for these in-the-wild facial expressions cannot easily be assigned solely on the basis of pre-defined AU patterns. How to analyze the action units for such complex expressions is still an open question. To address this issue, we develop the RAF-AU database, which employs a sign-based (i.e., AUs) and judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild. We first reviewed the annotation methods in existing databases and identified crowdsourcing as a promising strategy for labeling in-the-wild facial expressions. RAF-AU was then finely annotated by experienced coders, and we conducted a preliminary investigation of which key AUs contribute most to a perceived emotion, as well as the relationship between AUs and facial expressions. Finally, we provide a baseline for AU recognition in RAF-AU using popular features and multi-label learning methods.
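
A multi-label AU baseline of the kind mentioned in the last sentence, independent binary decisions per action unit over a fixed feature representation, might be sketched as follows. The scikit-learn one-vs-rest logistic classifier and the precomputed feature matrices are assumptions for illustration, not the features or learners used for RAF-AU.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score

# X_*: (n_images, n_features) precomputed facial features (assumed given)
# Y_*: (n_images, n_aus) binary matrix, Y[i, j] = 1 if AU j is present in image i
def train_au_baseline(X_train, Y_train, X_test, Y_test):
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_train, Y_train)                           # one binary classifier per AU
    Y_pred = clf.predict(X_test)
    return f1_score(Y_test, Y_pred, average="macro")    # a common multi-label AU metric
```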

  

FoggySight: A Scheme for Facial Lookup Privacy

Dec 15, 2020
Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno

Advances in deep learning algorithms have enabled better-than-human performance on face recognition tasks. In parallel, private companies have been scraping social media and other public websites that tie photos to identities and have built up large databases of labeled face images. Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users. In this work, we tackle the problem of providing privacy from such face recognition systems. We propose and evaluate FoggySight, a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media. FoggySight's core feature is a community protection strategy where users acting as protectors of privacy for others upload decoy photos generated by adversarial machine learning algorithms. We explore different settings for this scheme and find that it does enable protection of facial privacy -- including against a facial recognition service with unknown internals.
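
The decoy idea can be pictured as perturbing a protector's photo so that a face-recognition embedding pulls it toward the identity being protected, for example with a PGD-style loop like the hedged sketch below. The embedding model, the epsilon budget, and the cosine objective are illustrative assumptions rather than FoggySight's actual algorithm.

```python
import torch
import torch.nn.functional as F

def make_decoy(embed_model, protector_img, target_emb, eps=8 / 255,
               step=1 / 255, iters=50):
    """PGD-style sketch: nudge a decoy photo's embedding toward a target identity."""
    decoy = protector_img.clone().detach()
    for _ in range(iters):
        decoy.requires_grad_(True)
        # maximize cosine similarity between the decoy's and the target's embeddings
        sim = F.cosine_similarity(embed_model(decoy), target_emb, dim=-1).mean()
        grad, = torch.autograd.grad(sim, decoy)
        with torch.no_grad():
            decoy = decoy + step * grad.sign()
            decoy = protector_img + (decoy - protector_img).clamp(-eps, eps)
            decoy = decoy.clamp(0, 1)                  # keep a valid image
    return decoy.detach()
```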

  

FePh: An Annotated Facial Expression Dataset for the RWTH-PHOENIX-Weather 2014 Dataset

Mar 03, 2020
Marie Alaghband, Niloofar Yousefi, Ivan Garibay

Facial expressions are an important part of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce. In this manuscript, we introduce FePh, a continuous sign language facial expression dataset comprising over 3,000 annotated images of the RWTH-PHOENIX-Weather 2014 development set. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images the subjects are mouthing the words, which makes the data more challenging. To annotate this dataset, we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also include a "None" class for images whose facial expression cannot be described by any of the aforementioned emotions. Although we present FePh in the context of facial expression and sign language, it has wider applications in gesture recognition and Human Computer Interaction (HCI) systems. The dataset will be publicly available.

  

3D Facial Action Units Recognition for Emotional Expression

Dec 01, 2017
N. Hussain, H. Ujir, I. Hipiny, J-L Minoi

Muscular activity triggers the activation of certain Action Units (AUs) for each facial expression over particular intervals of the expression. This paper presents methods to recognise facial Action Units using distances between the facial feature points associated with the activated muscles. The seven facial action units involved are AU1, AU4, AU6, AU12, AU15, AU17 and AU25, which characterise happy and sad expressions. Recognition is performed for each AU according to rules defined on the distances between facial points. The chosen facial distances are extracted from twelve facial features and are then used to train a Support Vector Machine (SVM) and a Neural Network (NN). Classification results using the SVM are presented for several different kernels, while results using the NN are presented for the training, validation and testing phases.

* To be published in Advanced Science Letters Volume 24 (ICCSE2017) 
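
The pipeline outlined in the abstract, distances between facial landmark points as features followed by an SVM evaluated with different kernels, could be sketched like this. The landmark pairs and the scikit-learn cross-validation are illustrative stand-ins for the authors' twelve facial features and rule definitions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def landmark_distances(landmarks, pairs):
    """landmarks: (n_samples, n_points, 3) 3D facial points; pairs: list of index pairs."""
    return np.stack([np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=1)
                     for i, j in pairs], axis=1)

def evaluate_kernels(landmarks, labels, pairs):
    X = landmark_distances(landmarks, pairs)            # distance features per sample
    scores = {}
    for kernel in ("linear", "poly", "rbf"):
        svm = SVC(kernel=kernel)
        scores[kernel] = cross_val_score(svm, X, labels, cv=5).mean()
    return scores                                        # mean accuracy per kernel
```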
  

Detection and Localization of Facial Expression Manipulations

Mar 15, 2021
Ghazal Mazaheri, Amit K. Roy-Chowdhury

Concern regarding the widespread use of fraudulent images and videos in social media necessitates precise detection of such fraud. The importance of facial expressions in communication is widely known, and adversarial attacks often focus on manipulating the expression-related features. Thus, it is important to develop methods that can detect manipulations in facial expressions and localize the manipulated regions. To address this problem, we propose a framework that is able to detect manipulations in facial expression using a close combination of facial expression recognition and image manipulation detection methods. With the addition of feature maps extracted from the facial expression recognition framework, our manipulation detector is able to localize the manipulated region. We show that, on the Face2Face dataset, where there is abundant expression manipulation, our method achieves over 3% higher accuracy for both classification and localization of manipulations compared to state-of-the-art methods. In addition, results on the NeuralTextures dataset, where the facial expressions corresponding to the mouth regions have been modified, show 2% higher accuracy in both classification and localization of manipulation. We demonstrate that the method performs on par with state-of-the-art methods in cases where the expression is not manipulated but the identity is changed, thus ensuring the generalizability of the approach.
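
The fusion described above, feature maps from a facial-expression-recognition network concatenated into a manipulation detector that both classifies and localizes, might be arranged roughly as in the hedged PyTorch sketch below. The two ResNet-18 streams and the small convolutional localization head producing a coarse mask are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ManipulationDetectorSketch(nn.Module):
    """Sketch: fuse expression features with a detector for classification + localization."""

    def __init__(self):
        super().__init__()
        fer = models.resnet18(weights=None)              # stand-in expression network
        det = models.resnet18(weights=None)              # stand-in manipulation stream
        self.fer_features = nn.Sequential(*list(fer.children())[:-2])
        self.det_features = nn.Sequential(*list(det.children())[:-2])
        self.cls_head = nn.Linear(1024, 2)               # real vs. manipulated
        self.loc_head = nn.Sequential(                   # coarse manipulation mask
            nn.Conv2d(1024, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1))

    def forward(self, x):
        f = torch.cat([self.fer_features(x), self.det_features(x)], dim=1)  # (B, 1024, H/32, W/32)
        logits = self.cls_head(f.mean(dim=(2, 3)))       # global pooling for classification
        mask = self.loc_head(f)                          # per-location manipulation logits
        return logits, mask
```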

  