Amit Giloni


X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

Jul 02, 2023
Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya Shimizu, Yuval Elovici, Asaf Shabtai

Figures 1–4 for X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender's decision-making process; and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital spaces using five different attack scenarios (including adaptive attacks), the COCO dataset, and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results show that X-Detect outperforms state-of-the-art methods in distinguishing between benign and adversarial scenes across all attack scenarios, while maintaining a 0% false positive rate (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.
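The ensemble idea in the abstract can be illustrated with a minimal sketch: several explainable-by-design detectors each classify the extracted object, and an alert is raised when their majority vote contradicts the object detector's original label. All function names and the voting rule below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the disagreement-based alerting idea behind
# X-Detect. The helper names and the simple majority-vote rule are
# assumptions for illustration only.
from collections import Counter

def ensemble_alert(detector_label, ensemble_labels):
    """Raise an alert if the majority vote of the explainable
    detectors contradicts the object detector's label."""
    vote, _ = Counter(ensemble_labels).most_common(1)[0]
    return vote != detector_label

# Usage: the object detector says "toothpaste", but the ensemble of
# scene-manipulation/feature-transformation detectors votes "soda can",
# so an alert is raised.
print(ensemble_alert("toothpaste", ["soda can", "soda can", "toothpaste"]))  # True
```

In this reading, the explanation accompanying an alert would come from the individual detectors' votes, since each detector is explainable by design.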


BENN: Bias Estimation Using Deep Neural Network

Dec 23, 2020
Amit Giloni, Edita Grolman, Tanja Hagemann, Ronald Fromm, Sebastian Fischer, Yuval Elovici, Asaf Shabtai

Figures 1–4 for BENN: Bias Estimation Using Deep Neural Network

The need to detect bias in machine learning (ML) models has led to the development of multiple bias detection methods, yet utilizing them is challenging, since each method: i) explores a different ethical aspect of bias, which may result in contradictory output among the different methods; ii) provides output with a different range/scale and therefore cannot be compared with other methods; and iii) requires different input, so a human expert must be involved to adjust each method to the examined model. In this paper, we present BENN -- a novel bias estimation method that uses a pretrained unsupervised deep neural network. Given an ML model and data samples, BENN provides a bias estimation for every feature based on the model's predictions. We evaluated BENN using three benchmark datasets and one proprietary churn prediction model used by a European Telco, and compared it with an ensemble of 21 existing bias estimation methods. The evaluation results highlight BENN's significant advantages over the ensemble: it is generic (i.e., it can be applied to any ML model), requires no domain expert, and yet provides bias estimations that are aligned with those of the ensemble.
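To make the notion of a per-feature bias score concrete, here is a naive baseline in the statistical-parity family: the gap in positive-prediction rates between the groups a feature induces. BENN itself replaces hand-crafted measures like this with a pretrained unsupervised DNN; this sketch only shows the kind of per-feature output such methods produce, and all names are illustrative.

```python
# Illustrative baseline only (not BENN): a per-feature bias score based
# on statistical parity -- the maximum difference in positive-prediction
# rates between any two groups induced by the feature. 0 means no
# measured bias for that feature.
def parity_gap(feature_values, predictions):
    """Return max minus min positive-prediction rate across the
    groups defined by feature_values."""
    groups = {}
    for value, pred in zip(feature_values, predictions):
        groups.setdefault(value, []).append(pred)
    rates = [sum(preds) / len(preds) for preds in groups.values()]
    return max(rates) - min(rates)

# Usage: group "a" is always predicted positive, group "b" only half
# the time, giving a bias score of 0.5 for this feature.
print(parity_gap(["a", "a", "b", "b"], [1, 1, 0, 1]))  # 0.5
```

A score like this is computed once per feature, which matches the abstract's description of BENN producing "a bias estimation for every feature based on the model's predictions", while avoiding the per-method tuning the abstract criticizes.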
