When focusing an image, depth of field, aperture, and the distance from the camera to the object must be taken into account, both in the visible and in the infrared spectrum. Our experiments reveal that, in addition, the focusing problem in the thermal spectrum is strongly dependent on the temperature of the object itself (and/or the scene).
This work defines a procedure for collecting naturally induced emotional facial expressions through the viewing of movie excerpts with high emotional content, and reports experimental data ascertaining the effects of emotions on memory word recognition tasks. The induced emotional states include the four basic emotions of sadness, disgust, happiness, and surprise, as well as the neutral emotional state. The resulting database contains both thermal and visible emotional facial expressions, portrayed by forty Italian subjects and simultaneously acquired by appropriately synchronizing a thermal and a standard visible camera. Each subject's recording session lasted 45 minutes, allowing each mode (thermal or visible) to collect a minimum of 2000 facial expressions, from which a minimum of 400 were selected as highly expressive of each emotion category. The database is available to the scientific community and can be obtained by contacting one of the authors. For this pilot study, it was found that emotions and/or emotion categories do not affect individual performance on memory word recognition tasks, and that temperature changes in the face, or in some regions of it, do not discriminate among emotional states.
Face segmentation is the first step in face biometric systems. In this paper we present a face segmentation algorithm for thermographic images. This algorithm is compared with the classic Viola-Jones algorithm used for visible images. Experimental results reveal that, when segmenting a multispectral (visible and thermal) face database, the proposed algorithm is more than 10 times faster, while the accuracy of face segmentation in thermal images is higher than that of Viola-Jones.
This study aimed to assess the reliability and validity of the Polar V800 for measuring vertical jump height. Twenty-two physically active healthy men (age: 22.89 ± 4.23 years; body mass: 70.74 ± 8.04 kg; height: 1.74 ± 0.76 m) were recruited for the study. Reliability was evaluated by comparing measurements acquired by the Polar V800 in two identical testing sessions one week apart. Validity was assessed by comparing measurements simultaneously obtained using a force platform (gold standard), a high-speed camera, and the Polar V800 during squat jump (SJ) and countermovement jump (CMJ) tests. In the test-retest reliability assessment, high intraclass correlation coefficients (ICCs) were observed for the Polar V800 (mean: 0.90, SJ and CMJ). There was no significant systematic bias ± random error (p > 0.05) between test and retest. Low coefficients of variation (<5%) were detected for both jumps with the Polar V800. In the validity assessment, similar jump heights were detected among devices (p > 0.05). There was almost perfect agreement between the Polar V800 and the force platform for the SJ and CMJ tests (mean ICC = 0.95; no systematic bias ± random error in SJ, mean: -0.38 ± 2.10 cm, p > 0.05). The mean ICC between the Polar V800 and the high-speed camera was 0.91 for the SJ and CMJ tests; however, a significant systematic bias ± random error (0.97 ± 2.60 cm; p = 0.01) was detected in the CMJ test. The Polar V800 offers valid (compared with a force platform) and reliable information about vertical jump height performance in physically active healthy young men.
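Flight-time-based devices such as those in this protocol typically estimate jump height from a ballistic model of the centre of mass. A minimal sketch of that standard relation (this is the generic flight-time formula, not the Polar V800's proprietary algorithm):

```python
# Sketch: jump height from flight time under a ballistic model.
# This is the textbook relation h = g * t^2 / 8, not any device's
# internal algorithm.

G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight: float) -> float:
    """Return jump height in metres given flight time in seconds.

    The centre of mass rises for t_flight / 2 with take-off velocity
    v0 = g * t_flight / 2, so h = v0**2 / (2 * g) = g * t_flight**2 / 8.
    """
    return G * t_flight ** 2 / 8.0

# A 0.5 s flight time corresponds to roughly 0.31 m of jump height.
h = jump_height_from_flight_time(0.5)
```

Systematic differences between devices (as reported above for the high-speed camera in the CMJ) translate directly into flight-time measurement bias under this model.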
Practical determination of physical recovery after intense exercise is a challenging topic that must include mechanical aspects as well as cognitive ones, because most physical sport activities, as well as professional activities (including brain-computer-interface-operated systems), require good condition in both. This paper presents a new online handwriting database of 20 healthy subjects. The main goal was to study the influence of several physical exercise stimuli on different handwriting tasks and to evaluate recovery after strenuous exercise. To this aim, the subjects performed different handwriting tasks before and after physical exercise, along with other measurements such as metabolic and mechanical fatigue assessment. Experimental results showed that, although a fast mechanical recovery occurs and can be measured through lactate concentration and mechanical fatigue, this is not the case when cognitive effort is required. Handwriting analysis revealed that statistically significant differences in handwriting performance persist even after lactate concentration and mechanical assessments indicate recovery. This points to a need for longer recovery times in sport and professional activities than those measured in classical ways.
In this paper, we present a pressure characterization and normalization procedure for online handwriting acquisition. The normalization process has been tested in biometric recognition experiments (identification and verification) using the MCYT online signature database, which consists of the signatures of 330 users. The goal is to analyze realistic mismatch scenarios where users are enrolled with one stylus and later produce testing samples using a different stylus model with a different pressure response. Experimental results show: 1) a saturation behavior in the pressure signal; 2) different dynamic ranges across the styluses studied; 3) improved biometric recognition accuracy by means of pressure signal normalization, as well as a performance degradation in mismatched conditions; and 4) that interoperability between different styluses can be obtained by means of pressure normalization. Normalization improves signature identification rates by more than 7% (absolute) compared with mismatched scenarios.
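The core idea can be illustrated with a simple range mapping: pressure samples from each stylus are mapped to a common scale using that stylus's characterized dynamic range, so saturated or compressed styluses become comparable at matching time. The sketch below uses plain min-max normalization with clipping; the function name and the per-stylus range parameters are illustrative assumptions, and the paper's actual procedure may be more elaborate.

```python
import numpy as np

def normalize_pressure(p, p_min=None, p_max=None):
    """Map a raw pressure signal to [0, 1].

    p_min / p_max should be the stylus's characterized dynamic range
    (e.g. estimated from enrollment data), not this sample's own
    extremes, so that samples from different styluses land on a
    common scale. Values outside the range (saturation) are clipped.
    """
    p = np.asarray(p, dtype=float)
    lo = p.min() if p_min is None else p_min
    hi = p.max() if p_max is None else p_max
    if hi == lo:                      # degenerate range: no pressure variation
        return np.zeros_like(p)
    return np.clip((p - lo) / (hi - lo), 0.0, 1.0)

# Example: a 10-bit stylus with range [0, 1023]; an over-range sample
# (sensor saturation) is clipped to 1.0.
scaled = normalize_pressure([0, 512, 1024, 2048], p_min=0, p_max=1024)
```

With both the enrollment and the test samples mapped this way, the matcher compares pressure trajectories on a stylus-independent scale.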
We compare a wideband sub-band speech coder using ADPCM schemes with linear prediction against the same scheme with nonlinear prediction based on multilayer perceptrons. Exhaustive results are presented for each sub-band and for the full signal. Our proposed scheme with nonlinear neural network prediction outperforms the linear scheme by up to 2 dB in SEGSNR. In addition, we propose a simple method based on a nonlinearity to obtain a synthetic wideband signal from a narrowband signal.
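The last idea can be sketched with a memoryless nonlinearity: applying, for instance, |x| to the narrowband signal creates energy at harmonics above the original cutoff, which can then be high-pass filtered and mixed back in. The snippet below is only an illustration of that general principle, not the paper's exact method; the choice of |x|, the first-difference high-pass, and the mixing gain are all assumptions.

```python
import numpy as np

def extend_bandwidth(x_nb: np.ndarray, gain: float = 0.1) -> np.ndarray:
    """Crude nonlinear bandwidth-extension sketch.

    |x| is a memoryless nonlinearity whose output contains harmonics
    of the narrowband content; a first-order difference acts as a
    crude high-pass so mostly the new high-band energy is mixed back
    into the signal. Gain and filters are illustrative assumptions.
    """
    h = np.abs(x_nb)
    h = h - h.mean()                # remove the DC term introduced by |.|
    hp = np.diff(h, prepend=h[0])   # crude high-pass (first difference)
    return x_nb + gain * hp
```

In a real coder the narrowband signal would first be upsampled and the synthetic high band shaped with a proper filter bank; this sketch only shows where the new spectral content comes from.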
This paper presents a new algorithm for speaker recognition based on the combination of the classical Vector Quantization (VQ) and Covariance Matrix (CM) methods. The combined VQ-CM method improves the identification rates of each method alone, with a comparable computational burden. It offers a straightforward procedure for obtaining a model similar to a GMM with full covariance matrices. Experimental results also show that it is more robust against noise than VQ or CM alone.
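As a hypothetical illustration of combining the two scores, one could fuse the average VQ distortion with a covariance-based distance. The fusion weight and the Frobenius-norm distance below are illustrative assumptions, not the paper's actual combination rule.

```python
import numpy as np

def vq_distortion(frames: np.ndarray, codebook: np.ndarray) -> float:
    """Average Euclidean distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def fused_score(frames, codebook, model_cov, alpha=0.5):
    """Hypothetical linear fusion of a VQ score and a CM score.

    The covariance distance here is the Frobenius norm between the
    test utterance's feature covariance and the speaker model's
    covariance; lower fused scores mean a better match.
    """
    cm_dist = np.linalg.norm(np.cov(frames.T) - model_cov)
    return alpha * vq_distortion(frames, codebook) + (1 - alpha) * cm_dist
```

In practice the two scores would be normalized to comparable ranges before fusion, and the weight tuned on development data.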
This paper proposes a multi-section vector quantization approach for online signature recognition. We have used the MCYT database, which consists of 330 users and 25 skilled forgeries per person performed by 5 different impostors. This database is larger than those typically used in the literature. Nevertheless, we also provide results on the SVC database. Our proposed system outperforms the winner of SVC with a reduced computational requirement, around 47 times lower than DTW. In addition, our system reduces database storage requirements thanks to vector compression, and it is more privacy-friendly because the original signature cannot be recovered from the codebooks. Experimental results on MCYT yield a 99.76% identification rate and a 2.46% EER (skilled forgeries, individual threshold). Experimental results on SVC yield a 100% identification rate and EERs of 0% (individual threshold) and 0.31% (general threshold) when using a two-section VQ approach.
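The matching step of a multi-section VQ scheme can be sketched as follows: the time-ordered feature vectors of a signature are split into contiguous sections, each section is scored against that section's codebook, and the per-section distortions are accumulated. This is a minimal sketch under stated assumptions; codebook training, feature extraction, and decision thresholds are omitted, and the function names are illustrative.

```python
import numpy as np

def section_distortion(frames: np.ndarray, codebook: np.ndarray) -> float:
    """Average distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def multisection_score(signature: np.ndarray, codebooks: list) -> float:
    """Score a signature against one enrolled user's section codebooks.

    The signature (time-ordered feature vectors) is split into as many
    contiguous sections as the user has codebooks; with two codebooks
    this is the two-section VQ configuration. Lower scores indicate a
    better match to the enrolled model.
    """
    sections = np.array_split(signature, len(codebooks))
    return sum(section_distortion(s, cb) for s, cb in zip(sections, codebooks))
```

Because only the codebooks are stored, the template is compact and the original pen trajectory cannot be reconstructed from it, which is the privacy property noted above.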
This paper improves the speaker recognition rates of an MLP classifier and an LPCC codebook alone by using a linear combination of both methods. In simulations we obtained an improvement of 4.7% over an LPCC codebook of 32 vectors and of 1.5% for a codebook of 128 vectors (the error rate drops from 3.68% to 2.1%). We also propose an efficient algorithm that reduces the computational complexity of the LPCC-VQ system by a factor of 4.