Pedro C. Neto

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition

Aug 23, 2023
Pedro C. Neto, Eduarda Caldeira, Jaime S. Cardoso, Ana F. Sequeira

With the ever-growing complexity of deep learning models for face recognition, it becomes hard to deploy these systems in real life. Researchers have two options: 1) use smaller models; 2) compress their current models. Since the usage of smaller models might lead to concerning biases, compression gains relevance. However, compression might also increase the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup, and the racial bias of a state-of-the-art quantization approach when used with synthetic and real data. This analysis sheds light on the potential benefits of performing quantization with synthetic data, for instance, the reduction of biases in the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset, collected to measure and compare the performance of face recognition models across different ethnicities.
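As a rough illustration of the compression step the abstract refers to (not the paper's actual quantization pipeline, whose details are in the paper itself), the sketch below shows symmetric per-tensor uniform quantization of a weight matrix to int8, the kind of operation whose effect on subgroup performance the work studies. All function names are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor uniform quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to float32 for inference-time simulation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step per weight.
err = np.abs(w - w_hat).max()
```

The point of the paper is that this seemingly uniform rounding error is not uniform in its downstream effect: it can redistribute accuracy across ethnicity subgroups, which is why the calibration data (synthetic vs. real) matters.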

* Accepted for Oral at BIOSIG 2023 

Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face Morphing Detection

Jun 05, 2023
Eduarda Caldeira, Pedro C. Neto, Tiago Gonçalves, Naser Damer, Ana F. Sequeira, Jaime S. Cardoso

Morphing attacks keep threatening biometric systems, especially face recognition systems. Over time they have become simpler to perform and more realistic; as such, the use of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and imposing some constraints, we have developed IDistill, an interpretable method with state-of-the-art performance that provides information on both the identity separation in morph samples and its contribution to the final prediction. The domain information is learnt by an autoencoder and distilled to a classifier system in order to teach it to separate identity information. Compared to other methods in the literature, it outperforms them on three out of five databases and is competitive on the remaining two.
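The abstract's "distilled to a classifier" step can be sketched as a two-term objective: a morph/bona-fide classification loss plus a feature-distillation term pulling the classifier's internal features towards the autoencoder's latent codes. This is a minimal sketch under that reading, not IDistill's exact loss; `lam` and the function names are assumptions.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for morph (1) vs. bona fide (0) predictions."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def distill_loss(pred, label, student_feat, teacher_latent, lam=0.5):
    """Classification loss plus an MSE distillation term that teaches the
    classifier to reproduce the autoencoder's identity-separating latents."""
    mse = float(((student_feat - teacher_latent) ** 2).mean())
    return bce(pred, label) + lam * mse
```

When the classifier's features already match the teacher's latents, the second term vanishes and only the classification loss remains; otherwise the gradient drags the student's representation towards the identity-disentangled one.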

* Accepted at EUSIPCO 2023 

A CAD System for Colorectal Cancer from WSI: A Clinically Validated Interpretable ML-based Prototype

Jan 06, 2023
Pedro C. Neto, Diana Montezuma, Sara P. Oliveira, Domingos Oliveira, João Fraga, Ana Monteiro, João Monteiro, Liliana Ribeiro, Sofia Gonçalves, Stefan Reinhard, Inti Zlobec, Isabel M. Pinto, Jaime S. Cardoso

The integration of Artificial Intelligence (AI) and Digital Pathology has been increasing over the past years. Nowadays, applications of deep learning (DL) methods to diagnose cancer from whole-slide images (WSI) are, more than ever, a reality within different research groups. Nonetheless, the development of these systems has been limited by a myriad of constraints: the lack of training samples, scaling difficulties, the opaqueness of DL methods and, more importantly, the lack of clinical validation. As such, we propose a system designed specifically for the diagnosis of colorectal samples. The construction of the system consisted of four stages: (1) a careful data collection and annotation process, which resulted in one of the largest WSI colorectal samples datasets; (2) the design of an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations; (3) the development of an effective sampling approach based on the expected severity of each tile, which decreased the computation cost by a factor of almost 6x; and (4) the creation of a prototype that integrates the full set of features of the model to be evaluated in clinical practice. During these stages, the proposed method was evaluated on four separate test sets, two of which are external and completely independent. On the largest of those sets, the proposed approach achieved an accuracy of 93.44%. DL for colorectal samples is thus a few steps closer to ceasing to be research-exclusive and becoming fully integrated into clinical practice.
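Stage (3), severity-based tile sampling, can be sketched as ranking a slide's tiles by an expected-severity score and keeping only the top fraction, which is how a roughly 6x compute reduction would arise from processing ~1/6 of the tiles. This is a hypothetical sketch of that idea, not the paper's implementation; `keep_frac` and the function name are assumptions.

```python
import numpy as np

def sample_tiles(severity, keep_frac=1.0 / 6.0):
    """Rank tiles of a WSI by expected severity and keep the top fraction.

    severity: 1-D array, one expected-severity score per tile.
    Returns the (sorted) indices of the retained tiles.
    """
    k = max(1, int(len(severity) * keep_frac))   # never drop everything
    top = np.argsort(severity)[::-1][:k]         # highest-severity tiles first
    return np.sort(top)                          # restore slide order
```

Processing only the retained indices keeps the most diagnostically suspicious regions while skipping the bulk of (mostly benign) tissue tiles.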

* Under Review 

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric (Face) Recognition

Nov 22, 2022
Pedro C. Neto, Ana F. Sequeira, Jaime S. Cardoso, Philipp Terhörst

In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as forensic investigations, accurately and reliably stating the matching confidence is of high importance. Previous works on biometric confidence estimation can differentiate well between high and low confidence, but lack interpretability. Therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples into a joint PIC score, which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against available biometric confidence estimation methods on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available.
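The core quantity the abstract describes, the probability that a comparison score comes from a same-identity (genuine) pair, can be illustrated with a simple Bayes estimate over calibration score distributions. This is a generic sketch of that idea using Gaussian kernel density estimates, not the paper's actual PIC estimator; the prior, bandwidth, and function name are all assumptions.

```python
import numpy as np

def pic_like_score(s, genuine, impostor, prior=0.5, bw=0.05):
    """Estimate P(same identity | comparison score s) via Bayes' rule,
    using Gaussian KDEs fitted on calibration genuine/impostor scores."""
    def kde(x, data):
        z = (x - data) / bw
        return np.exp(-0.5 * z ** 2).mean() / (bw * np.sqrt(2.0 * np.pi))

    pg = kde(s, genuine)    # likelihood under the genuine distribution
    pi = kde(s, impostor)   # likelihood under the impostor distribution
    return prior * pg / (prior * pg + (1.0 - prior) * pi + 1e-12)
```

Unlike a raw similarity score, the output is directly interpretable: a value of 0.95 means a 95% chance the pair shares an identity, given the calibration data and prior.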

OrthoMAD: Morphing Attack Detection Through Orthogonal Identity Disentanglement

Aug 23, 2022
Pedro C. Neto, Tiago Gonçalves, Marco Huber, Naser Damer, Ana F. Sequeira, Jaime S. Cardoso

Morphing attacks are one of the many threats constantly affecting deep face recognition systems. A morphing attack consists of selecting two faces from different individuals and fusing them into a final image that contains the identity information of both. In this work, we propose a novel regularisation term that takes into account the identity information present in both faces and promotes the creation of two orthogonal latent vectors. We evaluate our proposed method (OrthoMAD) on five different types of morphing from the FRLL dataset and evaluate the performance of our model when trained on five distinct datasets. With a small ResNet-18 as the backbone, we achieve state-of-the-art results in the majority of the experiments, and competitive results in the others. The code of this paper will be publicly available.
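One common way to express "promote two orthogonal latent vectors" as a differentiable penalty is to minimise the squared cosine similarity between them. This is a plausible sketch of such a regulariser, not necessarily the exact term used in OrthoMAD; the function name is an assumption.

```python
import numpy as np

def ortho_reg(v1, v2, eps=1e-8):
    """Orthogonality regulariser for two identity latent vectors:
    squared cosine similarity, which is 0 when the vectors are
    orthogonal and 1 when they are parallel."""
    cos = float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + eps)
    return cos ** 2
```

Added to the detection loss, this term pushes the network to disentangle the two contributing identities of a morph into separate, non-overlapping latent directions.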

* Accepted at BIOSIG 2022 

Explainable Biometrics in the Age of Deep Learning

Aug 19, 2022
Pedro C. Neto, Tiago Gonçalves, João Ribeiro Pinto, Wilson Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso

Systems capable of analyzing and quantifying human physical or behavioral traits, known as biometric systems, are growing in use and application variability. Since their evolution from handcrafted features and traditional machine learning to deep learning and automatic feature extraction, the performance of biometric systems has increased to outstanding levels. Nonetheless, the cost of this fast progression is still not understood. Due to their opacity, deep neural networks are difficult to understand and analyze; hence, hidden capacities, or decisions motivated by the wrong cues, are a potential risk. Researchers have started to pivot their focus towards the understanding of deep neural networks and the explanation of their predictions. In this paper, we provide a review of the current state of explainable biometrics based on a study of 47 papers, and comprehensively discuss the direction in which this field should be developed.

* Submitted for review 

SYN-MAD 2022: Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data

Aug 15, 2022
Marco Huber, Fadi Boutros, Anh Thi Luu, Kiran Raja, Raghavendra Ramachandra, Naser Damer, Pedro C. Neto, Tiago Gonçalves, Ana F. Sequeira, Jaime S. Cardoso, João Tremoço, Miguel Lourenço, Sergio Serra, Eduardo Cermeño, Marija Ivanovska, Borut Batagelj, Andrej Kronovšek, Peter Peer, Vitomir Štruc

This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data (SYN-MAD), held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted a total of 12 participating teams, from both academia and industry, based in 11 different countries. In the end, seven valid submissions were submitted by the participating teams and evaluated by the organizers. The competition was held to present and attract solutions that detect face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was limited to synthetic data provided by the organizers. The submitted solutions presented innovations that led to outperforming the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/SYN-MAD-2022.

* Accepted at International Joint Conference on Biometrics (IJCB) 2022 

OCFR 2022: Competition on Occluded Face Recognition From Synthetically Generated Structure-Aware Occlusions

Aug 15, 2022
Pedro C. Neto, Fadi Boutros, Joao Ribeiro Pinto, Naser Damer, Ana F. Sequeira, Jaime S. Cardoso, Messaoud Bengherabi, Abderaouf Bousnat, Sana Boucheta, Nesrine Hebbadj, Mustafa Ekrem Erakın, Uğur Demir, Hazım Kemal Ekenel, Pedro Beber de Queiroz Vidal, David Menotti

This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022), embraced by the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of three participating teams, all from academia. Eventually, six valid submissions were submitted and then evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. The participants were free to use any training data, and the testing data was built by the organisers by synthetically occluding parts of the face images of a well-known dataset. The submitted solutions presented innovations and performed very competitively against the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with well-defined evaluation protocols.

* Accepted at International Joint Conference on Biometrics 2022 