Among all medical biometric traits, the photoplethysmogram (PPG) is the easiest to acquire. PPG records blood volume changes with just a combination of a light-emitting diode and a photodiode, from any part of the body. With the penetration of IoT and smart homes, PPG recording can easily be integrated into other vital wearable devices. PPG reflects the peculiarities of the hemodynamics and cardiovascular system of each individual. This paper presents a non-fiducial method for PPG-based biometric authentication. Being a physiological signal, PPG alters with physical/mental stress and over time. For robustness, these variations cannot be ignored. While most previous works focused only on single-session data, this paper demonstrates an extensive performance evaluation of PPG biometrics against single-session data, different emotions, physical exercise, and time lapse, using the Continuous Wavelet Transform (CWT) and Direct Linear Discriminant Analysis (DLDA). When evaluated on different states and datasets, an equal error rate (EER) of $0.5\%$-$6\%$ was achieved for $45$-$60$ s of average training time. Our CWT/DLDA-based technique outperformed all other dimensionality-reduction techniques and previous work.
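To make the CWT front end concrete, the following is a minimal illustrative sketch (not the paper's implementation) of turning a 1-D PPG segment into a scalogram of wavelet magnitudes via convolution with scaled Morlet wavelets; the wavelet parameters and scale choices here are assumptions for illustration. The flattened scalogram rows would then serve as the feature vector fed to a discriminant projection such as DLDA.

```python
import numpy as np

def morlet(M, w=5.0, s=1.0):
    # Complex Morlet wavelet sampled at M points, stretched by scale s.
    t = np.arange(-M / 2, M / 2) / s
    return np.exp(1j * w * t) * np.exp(-0.5 * t ** 2) * np.pi ** -0.25

def cwt_scalogram(signal, scales):
    # One row of |CWT| magnitudes per scale, computed by convolving the
    # signal with a wavelet stretched to that scale.
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        wav = morlet(min(10 * int(s), len(signal)), s=s)
        out[i] = np.abs(np.convolve(signal, wav, mode="same"))
    return out

# Toy PPG-like segment: a slow periodic component plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 256)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)
scalogram = cwt_scalogram(ppg, scales=[2, 4, 8, 16])
feature_vector = scalogram.ravel()  # input to DLDA in a full pipeline
```

The scalogram captures how spectral content of the pulse waveform varies over time, which is the subject-specific structure a discriminant projection can then separate.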
The ubiquitous presence of redundant, unedited, raw videos on the Internet has made video summarization an important problem. Traditional methods of video summarization employ a heuristic set of hand-crafted features, which in many cases fail to capture the subtle abstraction of a scene. This paper presents a deep learning method that maps the context of a video to the importance of a scene, similar to that perceived by humans. In particular, a convolutional neural network (CNN)-based architecture is proposed to mimic frame-level shot importance for user-oriented video summarization. The weights and biases of the CNN are trained extensively through off-line processing, so that it can provide the importance of a frame of an unseen video almost instantaneously. Experiments on estimating shot importance are carried out using the publicly available TVSum50 database. It is shown that the performance of the proposed network is substantially better than that of commonly used feature-based methods for estimating shot importance, in terms of mean absolute error, absolute error variance, and relative F-measure.
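The core idea of frame-level importance regression can be sketched as a forward pass that maps a frame through convolution, nonlinearity, pooling, and a linear head to a score in (0, 1). This is a minimal numpy illustration under assumed toy dimensions, not the paper's architecture; a trained network would learn the kernels and head weights from labeled importance scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # Valid 2-D correlation of a single-channel image with kernel k.
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def frame_importance(frame, kernels, w, b):
    # conv -> ReLU -> global average pool -> linear -> sigmoid score.
    feats = np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))

# Toy example: a 16x16 grayscale "frame", 4 random 3x3 kernels.
kernels = rng.normal(size=(4, 3, 3))
w = rng.normal(size=4)
frame = rng.normal(size=(16, 16))
score = frame_importance(frame, kernels, w, b=0.0)
```

Because inference is a single forward pass, scoring an unseen frame is nearly instantaneous once training (the expensive off-line step) is complete.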
Automatic prediction of continuous-valued emotional states requires the selection of suitable affective features to develop a regression system based on supervised machine learning. This paper investigates the performance of features statistically learned using convolutional neural networks for instantaneously predicting the continuous dimensions of emotional state. Features with minimum redundancy and maximum relevance are chosen using a mutual-information-based selection process. The performance of frame-by-frame prediction of emotional state using the moderate-length features proposed in this paper is evaluated on the spontaneous and naturalistic human-human conversations of the RECOLA database. Experimental results show that the proposed model can be used for instantaneous prediction of emotional state with higher accuracy than the traditional audio or video features used for affective computation.
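The minimum-redundancy maximum-relevance idea can be illustrated with a small greedy selector: rank features by mutual information with the target, then iteratively add the feature whose relevance most exceeds its average redundancy with already-selected features. This is an illustrative sketch using a histogram-based MI estimate, not the paper's exact selection procedure; bin count and scoring form are assumptions.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Histogram-based estimate of mutual information (in nats)
    # between two 1-D samples.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def mrmr_select(X, y, k):
    # Greedy min-redundancy max-relevance selection of k feature columns.
    relevance = np.array([mutual_info(X[:, j], y) for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy check: column 2 is a noisy copy of the target, so it should rank first.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = rng.normal(size=(200, 5))
X[:, 2] = y + 0.1 * rng.normal(size=200)
chosen = mrmr_select(X, y, k=3)
```

In a full pipeline, the columns of `X` would be the CNN-learned affective features and `y` a continuous emotion dimension such as arousal or valence.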