Aidan Boyd

Iris Liveness Detection Competition (LivDet-Iris) -- The 2023 Edition
Oct 06, 2023
Patrick Tinsley, Sandip Purnapatra, Mahsa Mitcheff, Aidan Boyd, Colton Crum, Kevin Bowyer, Patrick Flynn, Stephanie Schuckers, Adam Czajka, Meiling Fang, Naser Damer, Xingyu Liu, Caiyong Wang, Xianyun Sun, Zhaohua Chang, Xinyue Li, Guangzhe Zhao, Juan Tapia, Christoph Busch, Carlos Aravena, Daniel Schulz

Teaching AI to Teach: Leveraging Limited Human Salience Data Into Unlimited Saliency-Based Training
Jun 08, 2023
Colton R. Crum, Aidan Boyd, Kevin Bowyer, Adam Czajka

Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models
Mar 27, 2023
Colton Crum, Patrick Tinsley, Aidan Boyd, Jacob Piland, Christopher Sweet, Timothy Kelley, Kevin Bowyer, Adam Czajka

State Of The Art In Open-Set Iris Presentation Attack Detection
Aug 22, 2022
Aidan Boyd, Jeremy Speth, Lucas Parzianello, Kevin Bowyer, Adam Czajka

The Value of AI Guidance in Human Examination of Synthetically-Generated Faces
Aug 22, 2022
Aidan Boyd, Patrick Tinsley, Kevin Bowyer, Adam Czajka

Human Saliency-Driven Patch-based Matching for Interpretable Post-mortem Iris Recognition
Aug 03, 2022
Aidan Boyd, Daniel Moreira, Andrey Kuehlkamp, Kevin Bowyer, Adam Czajka

Interpretable Deep Learning-Based Forensic Iris Segmentation and Recognition
Dec 20, 2021
Andrey Kuehlkamp, Aidan Boyd, Adam Czajka, Kevin Bowyer, Patrick Flynn, Dennis Chute, Eric Benjamin

CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning
Dec 01, 2021
Aidan Boyd, Patrick Tinsley, Kevin Bowyer, Adam Czajka

Human-Aided Saliency Maps Improve Generalization of Deep Learning
May 07, 2021
Aidan Boyd, Kevin Bowyer, Adam Czajka