
Alon Zolfi


YolOOD: Utilizing Object Detection Concepts for Out-of-Distribution Detection

Dec 05, 2022
Alon Zolfi, Guy Amit, Amit Baras, Satoru Koda, Ikuya Morikawa, Yuval Elovici, Asaf Shabtai

Out-of-distribution (OOD) detection has attracted considerable attention from the machine learning research community in recent years due to its importance in deployed systems. Most previous studies have focused on detecting OOD samples in the multi-class classification task, whereas OOD detection in the multi-label classification task remains underexplored. In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task. Object detection models have an inherent ability to distinguish between objects of interest (in-distribution) and irrelevant (e.g., OOD) objects in images that contain multiple objects from different categories. This ability allows us to convert a regular object detection model into an image classifier with inherent OOD detection capabilities using only minor changes. We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform them on a comprehensive suite of in-distribution and OOD benchmark datasets.

* 10 pages, 4 figures 
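
The core idea, per the abstract, is to reuse the objectness and class confidences of a YOLO-style detection head as an OOD signal. The sketch below is not the authors' implementation; it assumes a hypothetical head output of shape (num_cells, 5 + C) and simply aggregates objectness-weighted class confidence into an in-distribution score.

```python
# Minimal sketch (assumptions, not the YolOOD code) of scoring an image for OOD
# using a YOLO-style head whose per-cell output is [x, y, w, h, obj, class_1..class_C].
import torch

def yolo_style_ood_score(head_output: torch.Tensor) -> torch.Tensor:
    """head_output: (num_cells, 5 + C) raw logits. Higher score -> more in-distribution."""
    objectness = torch.sigmoid(head_output[:, 4])        # (num_cells,)
    class_probs = torch.sigmoid(head_output[:, 5:])      # (num_cells, C)
    # Per-cell evidence that *some* in-distribution class is present.
    cell_scores = objectness * class_probs.max(dim=1).values
    # An image with no confident cell anywhere is flagged as OOD.
    return cell_scores.max()

# Usage sketch: score = yolo_style_ood_score(model(img)); is_ood = score < threshold
```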

Attacking Object Detector Using A Universal Targeted Label-Switch Patch

Nov 16, 2022
Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years. These attacks cause the model to make incorrect predictions by placing a patch containing an adversarial pattern on the target object or anywhere within the frame. However, no prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself. In this study, we propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO. In our attack, we use (i) a tailored projection function that enables the adversarial patch to be placed on multiple target objects in the image (e.g., cars), each of which may be located at a different distance from the camera or viewed from a different angle, and (ii) a unique loss function capable of changing the label of the attacked objects. The proposed universal patch, which is trained in the digital domain, is transferable to the physical domain. We performed an extensive evaluation using different types of object detectors, different video streams captured by different cameras, and various target classes, and evaluated different configurations of the adversarial patch in the physical domain.
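
As an illustration of the "label-switch" objective mentioned above, the following is a minimal sketch and not the paper's loss: it assumes per-detection class and objectness logits for the attacked objects, and combines a targeted cross-entropy term (to switch the label) with an objectness-preservation term.

```python
# Hypothetical label-switch loss sketch; the patch would be optimized to minimize it.
import torch
import torch.nn.functional as F

def label_switch_loss(class_logits: torch.Tensor,
                      objectness_logits: torch.Tensor,
                      target_class: int) -> torch.Tensor:
    """class_logits: (N, C) for the N attacked detections; objectness_logits: (N,)."""
    # Push each attacked detection toward the adversary's chosen label.
    target = torch.full((class_logits.size(0),), target_class, dtype=torch.long)
    switch_term = F.cross_entropy(class_logits, target)
    # Keep the detector confident that an object is still present (label switch, not hiding).
    keep_term = F.binary_cross_entropy_with_logits(
        objectness_logits, torch.ones_like(objectness_logits))
    return switch_term + keep_term
```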

Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation

May 26, 2022
Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. These attacks have aimed solely at compromising the models' integrity (i.e., the trustworthiness of the model's predictions), while adversarial attacks targeting the models' availability, a critical aspect in safety-critical domains such as autonomous driving, have not been explored by the machine learning research community. In this paper, we propose NMS-Sponge, a novel approach that negatively affects the decision latency of YOLO, a state-of-the-art object detector, and compromises the model's availability by applying a universal adversarial perturbation (UAP). In our experiments, we demonstrate that the proposed UAP is able to increase the processing time of individual frames by adding "phantom" objects while preserving the detection of the original objects.
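
The availability attack described above works by flooding non-maximum suppression (NMS) with "phantom" candidates. Below is a hypothetical sketch of such a sponge objective, not the NMS-Sponge implementation: it assumes access to the detector's per-candidate confidences and pushes currently suppressed candidates above the confidence threshold so that NMS has more boxes to process.

```python
# Illustrative sponge loss sketch; minimizing it w.r.t. a UAP `delta` inflates the NMS workload.
import torch

def phantom_objects_loss(pred_conf: torch.Tensor, conf_thresh: float = 0.25) -> torch.Tensor:
    """pred_conf: (num_candidates,) sigmoid confidences produced by the detector."""
    below = pred_conf[pred_conf < conf_thresh]
    # Encourage every currently-suppressed candidate to cross the threshold
    # and become a "phantom" box that NMS must consider.
    return (conf_thresh - below).sum()

# Outer loop sketch: delta <- delta - lr * grad_delta(phantom_objects_loss(conf(model(x + delta)))),
# with delta clamped to an L_inf budget so the perturbation stays quasi-imperceptible.
```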

Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models

Nov 21, 2021
Alon Zolfi, Shai Avidan, Yuval Elovici, Asaf Shabtai

Deep learning-based facial recognition (FR) models have demonstrated state-of-the-art performance in the past few years, even as wearing protective medical face masks became commonplace during the COVID-19 pandemic. Given the outstanding performance of these models, the machine learning research community has shown increasing interest in challenging their robustness. Initially, researchers presented adversarial attacks in the digital domain, and later the attacks were transferred to the physical domain. However, in many cases, attacks in the physical domain are conspicuous, requiring, for example, the placement of a sticker on the face, and thus may raise suspicion in real-world environments (e.g., airports). In this paper, we propose Adversarial Mask, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied to face masks in the form of a carefully crafted pattern. In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets. In addition, we validated our adversarial mask's effectiveness in real-world experiments by printing the adversarial pattern on a fabric medical face mask, causing the FR system to identify only 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with other evaluated masks).
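
To illustrate the kind of dodging objective such an attack optimizes, here is a minimal sketch under simplifying assumptions: the differentiable FR embedding model `fr_model`, the binary mask of the face-mask region, and the gallery embedding are all hypothetical placeholders, and this is not the paper's implementation.

```python
# Sketch of a dodging objective: render a learnable pattern onto the mask region
# and minimize similarity between the masked face and the enrolled identity.
import torch
import torch.nn.functional as F

def adversarial_mask_loss(fr_model, face: torch.Tensor, mask_region: torch.Tensor,
                          pattern: torch.Tensor, ref_embedding: torch.Tensor) -> torch.Tensor:
    """face: (1,3,H,W); mask_region: (1,1,H,W) binary map of the face-mask area;
    pattern: (1,3,H,W) learnable texture; ref_embedding: (1,D) gallery embedding."""
    # Composite the (clamped) adversarial pattern onto the mask area only.
    patched = face * (1 - mask_region) + torch.clamp(pattern, 0, 1) * mask_region
    emb = fr_model(patched)                                   # (1, D) face embedding
    # Dodging: make the masked face dissimilar from the enrolled identity.
    return F.cosine_similarity(emb, ref_embedding).mean()
```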

The Translucent Patch: A Physical and Universal Attack on Object Detectors

Dec 23, 2020
Alon Zolfi, Moshe Kravchik, Yuval Elovici, Asaf Shabtai

Physical adversarial attacks against object detectors have seen increasing success in recent years. However, these attacks require direct access to the object of interest in order to apply a physical patch, and to hide multiple objects, an adversarial patch must be applied to each object. In this paper, we propose a contactless, translucent physical patch containing a carefully constructed pattern, which is placed on the camera's lens, to fool state-of-the-art object detectors. The primary goal of our patch is to hide all instances of a selected target class. In addition, the optimization method used to construct the patch aims to ensure that the detection of other (untargeted) classes remains unaffected. Accordingly, in our experiments, which are conducted on state-of-the-art object detection models used in autonomous driving, we study the effect of the patch on the detection of both the selected target class and the other classes. We show that our patch was able to prevent the detection of 42.27% of all stop sign instances while maintaining high (nearly 80%) detection of the other classes.
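
The abstract describes two competing objectives: hide the target class while leaving the other classes unaffected. The sketch below is illustrative only; it assumes aligned per-candidate class confidences with and without the patch and is not the authors' optimization code.

```python
# Hypothetical two-term loss sketch for a lens patch: suppress the target class,
# penalize confidence drops for every other class relative to the clean image.
import torch

def translucent_patch_loss(scores: torch.Tensor, clean_scores: torch.Tensor,
                           target_class: int, alpha: float = 1.0) -> torch.Tensor:
    """scores / clean_scores: (N, C) class confidences for the same N candidate boxes,
    with and without the patch applied to the lens."""
    hide_term = scores[:, target_class].sum()            # drive target-class confidence down
    other = torch.ones(scores.size(1), dtype=torch.bool)
    other[target_class] = False
    # Only penalize confidence *drops* on untargeted classes (clamp at zero).
    preserve_term = (clean_scores[:, other] - scores[:, other]).clamp(min=0).sum()
    return hide_term + alpha * preserve_term
```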
