Most gender classification methods based on NIR images have used iris information. Recent work has explored using the whole periocular region, which surprisingly achieves better results. This suggests that the most relevant information for gender classification is not located in the iris, as previously expected. In this work, we analyze and demonstrate the location of the most relevant features that describe gender in periocular NIR images and evaluate their influence on classification. Experiments show that the periocular region contains more gender information than the iris region. We extracted several features (intensity, texture, and shape) and ranked them by relevance using the XgBoost algorithm. A Support Vector Machine and nine ensemble classifiers were then used to measure gender classification accuracy with the most relevant features. The best classification result (89.22\%) was obtained using 4,000 features located in the periocular region. Additional experiments compared full periocular iris images with iris-occluded images, yielding gender classification rates of 84.35\% and 85.75\%, respectively. We also contribute a new database (UNAB-Gender) to the state of the art. Based on these results, we suggest focusing only on the area surrounding the iris, which enables faster gender classification from NIR periocular images.
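The pipeline summarized above (rank extracted features by relevance with a boosted-tree model, then classify with an SVM on the top-ranked subset) can be sketched as follows. This is an illustrative outline, not the authors' implementation: the data is synthetic, the feature count is reduced, and scikit-learn's `GradientBoostingClassifier` stands in for XgBoost's importance ranking.

```python
# Hedged sketch of the described pipeline: rank features by importance with a
# boosted-tree model, keep the top-k, and train an SVM on the reduced set.
# Synthetic data replaces the real intensity/texture/shape features, and
# GradientBoostingClassifier stands in for XgBoost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for periocular feature vectors.
X, y = make_classification(n_samples=400, n_features=100, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 1) Rank features by importance with a boosted-tree model.
ranker = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
top_k = np.argsort(ranker.feature_importances_)[::-1][:20]  # keep 20 most relevant

# 2) Train an SVM only on the most relevant features.
svm = SVC(kernel="rbf").fit(X_tr[:, top_k], y_tr)
accuracy = svm.score(X_te[:, top_k], y_te)
print(f"accuracy on top-20 features: {accuracy:.2f}")
```

In the paper the same idea is applied at much larger scale (4,000 selected features) and with several ensemble classifiers alongside the SVM.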
Selfie soft biometrics has great potential for various applications ranging from marketing and security to online banking. However, it faces many challenges since there is limited control over data acquisition conditions. This chapter presents a Super-Resolution Convolutional Neural Network (SRCNN) approach that increases the resolution of low-quality periocular iris images cropped from selfies of subjects' faces. This work shows that increasing image resolution (2x and 3x) can improve the sex-classification rate when using a Random Forest classifier. The best sex-classification rates were 90.15% for the right eye and 87.15% for the left eye, achieved when images were upscaled from 150x150 to 450x450 pixels. These results compare well with the state of the art and show that improving image resolution with the SRCNN increases the sex-classification rate. Additionally, a novel selfie database captured from 150 subjects with an iPhone X was created (available upon request).
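The super-resolution step described above can be sketched as a standard pre-upsampling SRCNN: the low-resolution crop is first enlarged with bicubic interpolation (150x150 to 450x450 for the 3x case) and then refined by three convolutional layers. This is a minimal PyTorch sketch using the conventional SRCNN layer sizes (9-5-5 kernels, 64/32 filters); the exact configuration trained in the chapter may differ.

```python
# Minimal sketch of a pre-upsampling SRCNN, assuming the conventional
# 9-5-5 architecture; not the chapter's exact trained model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        # Patch extraction, non-linear mapping, and reconstruction layers.
        self.patch_extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.nonlinear_map = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, low_res: torch.Tensor, scale: int = 3) -> torch.Tensor:
        # Pre-upsample with bicubic interpolation, then refine with the CNN.
        x = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.patch_extract(x))
        x = F.relu(self.nonlinear_map(x))
        return self.reconstruct(x)

# A 150x150 grayscale crop upscaled 3x yields a 450x450 image, which
# would then be passed to the sex classifier (a Random Forest here).
lr = torch.rand(1, 1, 150, 150)
sr = SRCNN()(lr, scale=3)
print(tuple(sr.shape))  # expected (1, 1, 450, 450)
```

The refined 450x450 outputs, rather than the raw crops, are what feed the downstream classifier.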