We present a platform for student monitoring in remote education consisting of a collection of sensors and software that capture biometric and behavioral data. We define a collection of tasks to acquire behavioral data that can be useful for addressing the existing challenges in automatic student monitoring during remote evaluation. Additionally, we release an initial database including data from 20 different users completing these tasks with a set of basic sensors (webcam, microphone, mouse, and keyboard) and with more advanced sensors (NIR camera, smartwatch, additional RGB cameras, and an EEG band). Information from the computer (e.g. system logs, MAC and IP addresses, or web browsing history) is also stored. During each acquisition session, each user completed three different types of tasks, generating data of different natures: mouse and keystroke dynamics, face data, and audio data, among others. The tasks have been designed with two main goals in mind: i) to analyse the capacity of such biometric and behavioral data for detecting anomalies during remote evaluation, and ii) to study the capability of these data (e.g. EEG, ECG, or NIR video) for estimating other information about the users, such as their attention level, the presence of stress, or their pulse rate.
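To illustrate how per-user, per-task sensor streams from such a multi-sensor acquisition could be organized, the following is a minimal sketch. All names here (Session, TASK_TYPES, the directory layout) are hypothetical conveniences for illustration, not the platform's actual API or file format.

```python
# Hypothetical organization of one acquisition session: one directory per
# user, one subdirectory per task, one raw stream file per sensor.
from dataclasses import dataclass, field
from pathlib import Path

BASIC_SENSORS = ["webcam", "microphone", "mouse", "keyboard"]
ADVANCED_SENSORS = ["nir_camera", "smartwatch", "rgb_camera_2", "eeg_band"]
TASK_TYPES = ["keystroke_mouse", "face", "audio"]  # three task types per session

@dataclass
class Session:
    user_id: int
    root: Path
    tasks: list = field(default_factory=lambda: list(TASK_TYPES))

    def stream_path(self, task: str, sensor: str) -> Path:
        """Path where the raw stream of one sensor for one task is stored."""
        return self.root / f"user_{self.user_id:02d}" / task / f"{sensor}.dat"

# Enumerate every stream captured for user 1 in a session.
session = Session(user_id=1, root=Path("dataset"))
for task in session.tasks:
    for sensor in BASIC_SENSORS + ADVANCED_SENSORS:
        print(session.stream_path(task, sensor))
```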
In this paper we develop a robust heart rate (HR) estimation method using face video, designed for challenging scenarios with high-variability sources such as head movement, illumination changes, vibration, and blur. Our method employs a quality measure Q to extract a remote photoplethysmography (rPPG) signal as clean as possible from a specific face video segment. Our main motivation is developing robust technology for driver monitoring. Therefore, for our experiments we use a self-collected dataset consisting of Near Infrared (NIR) videos acquired with a camera mounted on the dashboard of a real moving car. We compare the performance of a classic rPPG algorithm with that of the same method when Q is used to select the video segments presenting the least variability. Our results show that, in a realistic driving setup, using the highest-quality video segments improves HR estimation, with a relative accuracy improvement larger than 20%.
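The following is a hedged sketch of the quality-gated estimation idea: score each video segment, keep only the highest-quality ones, and estimate HR from the dominant spectral peak of their rPPG traces. The paper's actual quality measure Q is not specified here; as a stand-in assumption, this sketch scores a segment by the inverse of its frame-to-frame motion energy.

```python
# Quality-gated HR estimation from rPPG traces (sketch, not the paper's exact Q).
import numpy as np

def segment_quality(motion_energy: np.ndarray) -> float:
    # Assumption: more motion (head movement, vibration, blur) -> lower quality.
    return 1.0 / (1.0 + float(np.mean(motion_energy)))

def estimate_hr(rppg: np.ndarray, fs: float) -> float:
    """HR in beats per minute from the dominant frequency of the rPPG signal."""
    rppg = rppg - rppg.mean()
    spectrum = np.abs(np.fft.rfft(rppg))
    freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # plausible HR range: 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

def hr_from_best_segments(segments, motions, fs, keep_ratio=0.3):
    """Keep the top-quality fraction of segments and average their HR estimates."""
    q = [segment_quality(m) for m in motions]
    order = np.argsort(q)[::-1][: max(1, int(keep_ratio * len(segments)))]
    return float(np.mean([estimate_hr(segments[i], fs) for i in order]))
```

The design choice mirrors the abstract: rather than improving the rPPG extractor itself, variability is handled by discarding the segments where extraction is least likely to succeed.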
In this paper we develop a quality assessment approach for face recognition based on deep learning. The method consists of a Convolutional Neural Network, FaceQnet, that is used to predict the suitability of a specific input image for face recognition purposes. FaceQnet is trained using the VGGFace2 database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images with quality information related to their ICAO compliance level. The groundtruth quality labels are obtained using FaceNet to generate comparison scores. We employ the groundtruth data to fine-tune a ResNet-based CNN, making it capable of returning a numerical quality measure for each input image. Finally, we verify whether the FaceQnet scores are suitable for predicting the expected performance when a specific image is employed for face recognition with a COTS face recognition system. Several conclusions can be drawn from this work, most notably: 1) we managed to employ an existing ICAO compliance framework and a pretrained CNN to automatically label data with quality information, 2) we trained FaceQnet for quality estimation by fine-tuning a pre-trained face recognition network (ResNet-50), and 3) we have shown that the predictions from FaceQnet are highly correlated with the face recognition accuracy of a state-of-the-art commercial system not used during development. FaceQnet is publicly available on GitHub.
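A minimal sketch of the training idea follows: a groundtruth quality label is derived from a face-recognition comparison score between each image and an ICAO-compliant mated image, and a ResNet-50 is fine-tuned to regress that label. The embedding source, initialization weights, loss, and hyperparameters below are assumptions for illustration, not the released FaceQnet configuration.

```python
# Sketch of FaceQnet-style quality regression (assumed details, see lead-in).
import torch
import torch.nn as nn
from torchvision.models import resnet50

def quality_label(probe_emb: torch.Tensor, gallery_emb: torch.Tensor) -> torch.Tensor:
    """Groundtruth quality: similarity of a probe embedding to the embedding
    of an ICAO-compliant mated image (embeddings from a model like FaceNet)."""
    return torch.nn.functional.cosine_similarity(probe_emb, gallery_emb, dim=-1)

# ResNet-50 backbone with a single-output regression head for quality.
# FaceQnet starts from a face recognition network; ImageNet weights are a stand-in.
model = resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One regression step: predict a scalar quality score per image."""
    optimizer.zero_grad()
    preds = model(images).squeeze(1)
    loss = criterion(preds, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```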