Sudipta Banerjee

Identity-Preserving Aging of Face Images via Latent Diffusion Models

Jul 17, 2023
Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon

The performance of automated face recognition systems is inevitably impacted by the facial aging process. However, high-quality datasets of individuals collected over several years are typically small in scale. In this work, we propose, train, and validate the use of latent text-to-image diffusion models for synthetically aging and de-aging face images. Our models succeed with few-shot training, and have the added benefit of being controllable via intuitive textual prompting. We observe a high degree of visual realism in the generated images while maintaining biometric fidelity measured by commonly used metrics. We evaluate our method on two benchmark datasets (CelebA and AgeDB) and observe a significant reduction (~44%) in the False Non-Match Rate compared to existing state-of-the-art baselines.
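
The ~44% figure refers to the False Non-Match Rate (FNMR): the fraction of genuine, same-identity comparisons that a matcher rejects. A minimal sketch of the metric itself (not the paper's evaluation code), with hypothetical similarity scores:

```python
import numpy as np

def fnmr(genuine_scores, threshold):
    """False Non-Match Rate: fraction of genuine (same-identity)
    comparison scores falling below the decision threshold."""
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    return float(np.mean(genuine_scores < threshold))

# hypothetical similarity scores for genuine pairs of aged face images
scores = [0.92, 0.85, 0.31, 0.78]
print(fnmr(scores, threshold=0.5))  # 0.25
```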

* Accepted to appear in International Joint Conference on Biometrics (IJCB) 2023 

Generating Adversarial Attacks in the Latent Space

Apr 10, 2023
Nitish Shukla, Sudipta Banerjee

Adversarial attacks in the input (pixel) space typically incorporate noise margins such as $L_1$ or $L_{\infty}$-norm to produce imperceptibly perturbed data that confound deep learning networks. Such noise margins confine the magnitude of permissible noise. In this work, we propose injecting adversarial perturbations in the latent (feature) space using a generative adversarial network, removing the need for margin-based priors. Experiments on MNIST, CIFAR10, Fashion-MNIST, CIFAR100 and Stanford Dogs datasets support the effectiveness of the proposed method in generating adversarial attacks in the latent space while ensuring a high degree of visual realism with respect to pixel-based adversarial attack methods.
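
A toy numpy sketch of the idea: with a linear "decoder" and a linear classifier standing in for the generative model and target network (both invented here for illustration), the perturbation is computed and applied entirely in the latent space, with no pixel-space norm margin:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 8))  # toy linear "decoder": 8-d latent -> 64-d pixels
w = rng.standard_normal(64)       # toy linear classifier acting on pixels

def logit(z):
    return float(w @ (D @ z))

z = rng.standard_normal(8)        # clean latent code
g = D.T @ w                       # gradient of the logit w.r.t. the latent code

# step in latent space just far enough to flip the classifier's decision;
# nothing here bounds the induced pixel-space change
z_adv = z - 1.5 * (logit(z) / (g @ g)) * g

print(np.sign(logit(z)) != np.sign(logit(z_adv)))  # True
```

The closed-form step works only because both maps are linear; with real networks the same move is an iterative gradient step on the latent code.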

Can GAN-induced Attribute Manipulations Impact Face Recognition?

Sep 07, 2022
Sudipta Banerjee, Aditi Aggarwal, Arun Ross

The impact of demographic factors such as age, sex, and race has been studied extensively in automated face recognition systems. However, the impact of digitally modified demographic and facial attributes on face recognition is relatively under-explored. In this work, we study the effect of attribute manipulations induced via generative adversarial networks (GANs) on face recognition performance. We conduct experiments on the CelebA dataset by intentionally modifying thirteen attributes using AttGAN and STGAN and evaluating their impact on two deep learning-based face verification methods, ArcFace and VGGFace. Our findings indicate that some attribute manipulations, particularly those involving eyeglasses and digital alteration of sex cues, can significantly impair face recognition (by up to 73%) and need further analysis.
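
The verification protocol behind such an evaluation can be sketched with synthetic embeddings (the 512-d vectors, noise level, and 0.4 threshold below are illustrative placeholders, not ArcFace or VGGFace values):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
emb_original = rng.standard_normal(512)  # embedding of the unmodified face
# embedding after a strong attribute edit, modeled as additive drift
emb_edited = emb_original + 1.5 * rng.standard_normal(512)

sim = cosine(emb_original, emb_edited)
verified = sim >= 0.4  # hypothetical verification threshold
print(sim, verified)
```

An edit "impairs" recognition when genuine pairs like this one fall below the threshold the system was calibrated with.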

Facial De-morphing: Extracting Component Faces from a Single Morph

Sep 07, 2022
Sudipta Banerjee, Prateek Jaiswal, Arun Ross

A face morph is created by strategically combining two or more face images corresponding to multiple identities. The intention is for the morphed image to match with multiple identities. Current morph attack detection strategies can detect morphs but cannot recover the images or identities used in creating them. The task of deducing the individual face images from a morphed face image is known as de-morphing. Existing work on de-morphing assumes the availability of a reference image pertaining to one identity in order to recover the image of the accomplice, i.e., the other identity. In this work, we propose a novel de-morphing method that can recover images of both identities simultaneously from a single morphed face image without needing a reference image or prior information about the morphing process. We propose a generative adversarial network that achieves single image-based de-morphing with a surprisingly high degree of visual realism and biometric similarity to the original face images. We demonstrate the performance of our method on landmark-based morphs and generative model-based morphs with promising results.
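
A numpy sketch of the reference-based setting that prior work assumes (pixel averaging stands in for full landmark-based morphing, which also warps geometry); the contribution above is precisely to remove the need for the reference image:

```python
import numpy as np

rng = np.random.default_rng(2)
face_a = rng.random((8, 8))           # first contributing face (grayscale patch)
face_b = rng.random((8, 8))           # second contributing face (the accomplice)
morph = 0.5 * face_a + 0.5 * face_b   # naive pixel-average morph

# reference-based de-morphing: with a trusted image of one identity,
# the blend can simply be inverted to expose the other
recovered_b = 2.0 * morph - face_a
print(np.allclose(recovered_b, face_b))  # True
```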

Conditional Identity Disentanglement for Differential Face Morph Detection

Jul 05, 2021
Sudipta Banerjee, Arun Ross

We present the task of differential face morph attack detection using a conditional generative adversarial network (cGAN). To determine whether a face image in an identification document, such as a passport, is morphed or not, we propose an algorithm that learns to implicitly disentangle identities from the morphed image, conditioned on the trusted reference image, using the cGAN. Furthermore, the proposed method can also recover some underlying information about the second subject used in generating the morph. We performed experiments on the AMSL face morph, MorGAN, and EMorGAN datasets to demonstrate the effectiveness of the proposed method. We also conducted cross-dataset and cross-attack detection experiments. We obtained promising results of 3% BPCER @ 10% APCER on intra-dataset evaluation, which is comparable to existing methods, and 4.6% BPCER @ 10% APCER on cross-dataset evaluation, which outperforms state-of-the-art methods by at least 13.9%.
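
The BPCER @ 10% APCER figures describe an operating point: fix the detection threshold so that 10% of attacks slip through, then measure how many bona fide images are wrongly flagged. A sketch of the metric on synthetic detector scores (the score distributions are invented for illustration):

```python
import numpy as np

def bpcer_at_apcer(bona_scores, attack_scores, target_apcer=0.10):
    """Operating-point metric for morph attack detection.
    Scores are 'morphness' scores: higher means more likely a morph."""
    bona = np.asarray(bona_scores, dtype=float)
    attack = np.asarray(attack_scores, dtype=float)
    # threshold at which target_apcer of attacks score below it,
    # i.e. are misclassified as bona fide
    thr = np.quantile(attack, target_apcer)
    # BPCER: bona fide images flagged as attacks at that threshold
    return float(np.mean(bona >= thr))

rng = np.random.default_rng(3)
bona = rng.normal(0.0, 1.0, 1000)     # synthetic scores for bona fide images
attack = rng.normal(4.0, 1.0, 1000)   # synthetic scores for morphs
v = bpcer_at_apcer(bona, attack)
print(v)
```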

One-shot Representational Learning for Joint Biometric and Device Authentication

Jan 02, 2021
Sudipta Banerjee, Arun Ross

In this work, we propose a method to simultaneously perform (i) biometric recognition (i.e., identify the individual) and (ii) device recognition (i.e., identify the device) from a single biometric image, say, a face image, using a one-shot scheme. Such a joint recognition scheme can be useful in devices such as smartphones for enhancing both security and privacy. We propose to automatically learn a joint representation that encapsulates both biometric-specific and sensor-specific features. We evaluate the proposed approach using iris, face, and periocular images acquired using near-infrared iris sensors and smartphone cameras. Experiments conducted using 14,451 images from 15 sensors resulted in a rank-1 identification accuracy of up to 99.81% and a verification accuracy of up to 100% at a false match rate of 1%.
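
Rank-1 identification accuracy, as reported above, asks whether each probe's nearest gallery embedding carries the correct identity label. A sketch with synthetic joint embeddings (dimensions and noise level invented for illustration):

```python
import numpy as np

def rank1_accuracy(probe_emb, gallery_emb, probe_ids, gallery_ids):
    """Fraction of probes whose nearest gallery embedding, by cosine
    similarity, has the correct identity label."""
    p = probe_emb / np.linalg.norm(probe_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    nearest = (p @ g.T).argmax(axis=1)  # index of best gallery match per probe
    return float(np.mean(np.asarray(gallery_ids)[nearest] == np.asarray(probe_ids)))

# synthetic joint (biometric + sensor) embeddings: probes are noisy
# copies of their gallery counterparts
rng = np.random.default_rng(4)
gallery = rng.standard_normal((15, 32))
probes = gallery + 0.05 * rng.standard_normal((15, 32))
ids = np.arange(15)
acc = rank1_accuracy(probes, gallery, ids, ids)
print(acc)  # 1.0
```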

* Accepted in 25th International Conference on Pattern Recognition (ICPR), (Milan, Italy), January 2021 

Smartphone Camera De-identification while Preserving Biometric Utility

Sep 17, 2020
Sudipta Banerjee, Arun Ross

The principle of Photo Response Non-Uniformity (PRNU) is often exploited to deduce the identity of the smartphone device whose camera or sensor was used to acquire a certain image. In this work, we design an algorithm that perturbs a face image acquired using a smartphone camera such that (a) sensor-specific details pertaining to the smartphone camera are suppressed (sensor anonymization); (b) the sensor pattern of a different device is incorporated (sensor spoofing); and (c) biometric matching using the perturbed image is not affected (biometric utility). We employ a simple approach utilizing the Discrete Cosine Transform to achieve the aforementioned objectives. Experiments conducted on the MICHE-I and OULU-NPU datasets, which contain periocular and facial data acquired using 12 smartphone cameras, demonstrate the efficacy of the proposed de-identification algorithm on three different PRNU-based sensor identification schemes. This work has applications in sensor forensics and personal privacy.
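
The mechanism can be illustrated with a simplified DCT low-pass step: PRNU-like sensor noise lives mainly in high-frequency coefficients, so attenuating them suppresses the sensor signature while the low frequencies carrying biometric content survive (the paper's actual perturbation rule is more targeted than this sketch):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (C @ C.T == I)."""
    j = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def suppress_high_frequencies(img, keep=8):
    """Zero out high-frequency 2D-DCT coefficients of a square image
    and reconstruct; an illustrative stand-in for sensor anonymization."""
    C = dct_matrix(img.shape[0])
    coeffs = C @ img @ C.T          # forward 2D DCT
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0        # keep only the low-frequency block
    return C.T @ (coeffs * mask) @ C  # inverse 2D DCT
```

Sensor spoofing would additionally inject another device's noise residual into the freed-up high-frequency coefficients.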

* Proc. of 10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Tampa, USA), September 2019  

Face Phylogeny Tree Using Basis Functions

Mar 13, 2020
Sudipta Banerjee, Arun Ross

Photometric transformations, such as brightness and contrast adjustment, can be applied to a face image repeatedly, creating a set of near-duplicate images. Identifying the original image from a set of such near-duplicates and deducing the relationship between them are important in the context of digital image forensics. This is commonly done by generating an image phylogeny tree (IPT): a hierarchical structure depicting the relationship between a set of near-duplicate images. In this work, we utilize three different families of basis functions to model pairwise relationships between near-duplicate images. The basis functions used in this work are orthogonal polynomials, wavelet basis functions, and radial basis functions. We perform extensive experiments to assess the performance of the proposed method across three different modalities, namely, face, fingerprint, and iris images; across different image phylogeny tree configurations; and across different types of photometric transformations. We also utilize the same basis functions to model geometric transformations and deep learning-based transformations. We perform extensive analysis of each basis function with respect to its ability to model arbitrary transformations and to distinguish between the original and the transformed images. Finally, we utilize the concept of approximate von Neumann graph entropy to explain the success and failure cases of the proposed IPT generation algorithm. Experiments indicate that the proposed algorithm generalizes well across different scenarios, thereby suggesting the merits of using basis functions to model the relationship between photometrically and geometrically modified images.
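
The pairwise modeling step can be sketched with the simplest possible basis, plain monomials fit by least squares (the paper's actual bases are orthogonal polynomials, wavelets, and radial basis functions); a low residual indicates that one image photometrically explains the other:

```python
import numpy as np

def fit_photometric_map(src, dst, degree=3):
    """Least-squares fit of dst = f(src) over a polynomial basis in
    pixel intensity; returns coefficients and mean squared residual."""
    X = np.vander(src.ravel(), degree + 1)
    coef, *_ = np.linalg.lstsq(X, dst.ravel(), rcond=None)
    residual = float(np.mean((X @ coef - dst.ravel()) ** 2))
    return coef, residual

rng = np.random.default_rng(5)
original = rng.random((16, 16))
adjusted = 1.2 * original + 0.1   # brightness/contrast near-duplicate

coef, resid = fit_photometric_map(original, adjusted)
print(resid < 1e-10)  # True: a linear photometric edit is captured exactly
```

Fitting in both directions and comparing residuals is one way to decide which image of a pair is the likelier parent.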

* Updated paper, particularly Section 4.2.7 

Some Research Problems in Biometrics: The Future Beckons

May 12, 2019
Arun Ross, Sudipta Banerjee, Cunjian Chen, Anurag Chowdhury, Vahid Mirjalili, Renu Sharma, Thomas Swearingen, Shivangi Yadav

The need for reliably determining the identity of a person is critical in a number of different domains ranging from personal smartphones to border security; from autonomous vehicles to e-voting; from tracking child vaccinations to preventing human trafficking; from crime scene investigation to personalization of customer service. Biometrics, which entails the use of biological attributes such as face, fingerprints and voice for recognizing a person, is being increasingly used in several such applications. While biometric technology has made rapid strides over the past decade, there are several fundamental issues that are yet to be satisfactorily resolved. In this article, we will discuss some of these issues and enumerate some of the exciting challenges in this field.

* 8 pages, 12 figures, ICB-2019 