Yuval Elovici

EyeDAS: Securing Perception of Autonomous Cars Against the Stereoblindness Syndrome
May 13, 2022

The Security of Deep Learning Defences for Medical Imaging
Jan 21, 2022

Adversarial Machine Learning Threat Analysis in Open Radio Access Networks
Jan 16, 2022

Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models
Nov 21, 2021

Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples
Oct 24, 2021

Dodging Attack Using Carefully Crafted Natural Makeup
Sep 14, 2021

A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
Jul 5, 2021

The Threat of Offensive AI to Organizations
Jun 30, 2021

CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals
Jun 15, 2021

RadArnomaly: Protecting Radar Systems from Data Manipulation Attacks
Jun 13, 2021