Luca Demetrio

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors

Oct 14, 2023
Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio

Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage. Nevertheless, recently proposed attacks have demonstrated limited effectiveness, as they do not optimize the use of the adopted manipulations and focus solely on specific elements of the HTML code. In this work, we overcome these limitations by first designing a novel set of fine-grained manipulations that modify the HTML code of the input phishing webpage without compromising its maliciousness or visual appearance, i.e., the manipulations are functionality- and rendering-preserving by design. We then select which manipulations should be applied to bypass the target detector via a query-efficient black-box optimization algorithm. Our experiments show that our attacks are able to raze to the ground the performance of current state-of-the-art ML-PWD using just 30 queries, thus overcoming the weaker attacks developed in previous work and enabling a much fairer robustness evaluation of ML-PWD.

* Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security (AISec '23), November 30, 2023, Copenhagen, Denmark 
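
The core idea, selecting functionality- and rendering-preserving HTML manipulations through a query-efficient black-box search, can be illustrated with a minimal sketch. This is not the paper's algorithm: the greedy loop below, the `detector_score` oracle, and the `manipulations` list are hypothetical placeholders standing in for the detector under attack and the fine-grained manipulations described in the abstract.

```python
import random

def greedy_blackbox_evasion(webpage_html, manipulations, detector_score,
                            query_budget=30, threshold=0.5):
    """Greedy black-box sketch: keep only the manipulations that lower the
    detector's phishing score, within a fixed query budget.

    `detector_score(html) -> float` is a hypothetical black-box oracle;
    `manipulations` is a list of html -> html functions assumed to be
    functionality- and rendering-preserving.
    """
    best_html = webpage_html
    best_score = detector_score(best_html)          # 1 query
    queries = 1

    candidates = manipulations[:]
    random.shuffle(candidates)

    for manipulate in candidates:
        if queries >= query_budget or best_score < threshold:
            break
        trial_html = manipulate(best_html)
        score = detector_score(trial_html)
        queries += 1
        if score < best_score:                      # keep useful changes only
            best_html, best_score = trial_html, score

    return best_html, best_score, queries, best_score < threshold
```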

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

Sep 13, 2023
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

RGB-D object recognition systems improve their predictive performance by fusing color and depth information, outperforming neural network architectures that rely solely on colors. While RGB-D systems are expected to be more robust to adversarial examples than RGB-only systems, they have also been proven to be highly vulnerable, and their robustness remains similar even when the adversarial examples are generated by altering only the colors of the original images. Several works have highlighted this vulnerability of RGB-D systems; however, technical explanations for this weakness are still lacking. In this work, we bridge this gap by investigating the deep representations learned by RGB-D systems, discovering that color features make the function learned by the network more complex and, thus, more sensitive to small perturbations. To mitigate this problem, we propose a defense based on a detection mechanism that makes RGB-D systems more robust against adversarial examples. We empirically show that this defense improves the performance of RGB-D systems against adversarial examples even when they are computed ad hoc to circumvent the detection mechanism, and that it is also more effective than adversarial training.

* Accepted for publication in the Information Sciences journal 
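
As a rough illustration of what a detection-based defense can look like (not the mechanism proposed in the paper), the sketch below rejects inputs whose deep features lie far from every class centroid computed on clean data; the feature extractor, the Euclidean distance, and the percentile-based threshold are all assumptions made for the example.

```python
import numpy as np

class DistanceRejector:
    """Reject inputs whose deep features lie far from every class centroid.

    Minimal sketch of a detection-based defense (not the paper's exact
    mechanism): `features` are embeddings from any RGB-D network.
    """

    def __init__(self, percentile=95):
        self.percentile = percentile
        self.centroids = {}
        self.threshold = None

    def fit(self, features, labels):
        # One centroid per class, computed on clean training embeddings.
        for c in np.unique(labels):
            self.centroids[c] = features[labels == c].mean(axis=0)
        dists = np.array([self._min_dist(f) for f in features])
        # Calibrate the rejection threshold on the clean data itself.
        self.threshold = np.percentile(dists, self.percentile)
        return self

    def _min_dist(self, f):
        return min(np.linalg.norm(f - mu) for mu in self.centroids.values())

    def predict(self, features):
        # True for inputs flagged as likely adversarial (to be rejected).
        return np.array([self._min_dist(f) > self.threshold for f in features])
```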

Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning

Aug 17, 2023
Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio

ModSecurity is widely recognized as the standard open-source Web Application Firewall (WAF), maintained by the OWASP Foundation. It detects malicious requests by matching them against the Core Rule Set (CRS), which identifies well-known attack patterns. Each rule in the CRS is manually assigned a weight based on the severity of the corresponding attack, and a request is flagged as malicious if the sum of the weights of the firing rules exceeds a given threshold. In this work, we show that this simple strategy is largely ineffective for detecting SQL injection (SQLi) attacks, as it tends to block many legitimate requests while also being vulnerable to adversarial SQLi attacks, i.e., attacks intentionally manipulated to evade detection. To overcome these issues, we design a robust machine learning model, named AdvModSec, which uses the CRS rules as input features and is trained to detect adversarial SQLi attacks. Our experiments show that AdvModSec, trained on the traffic directed towards the protected web services, achieves a better trade-off between detection and false positive rates, improving the detection rate of the vanilla version of ModSecurity with the CRS by 21%. Moreover, our approach improves robustness against adversarial SQLi attacks by 42%, thereby taking a step towards building more robust and trustworthy WAFs.
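
The contrast between the fixed weighted-sum logic of the CRS and a model learned from the same rule activations can be sketched as follows. The rule weights, threshold, and toy requests are illustrative only, not the real CRS values, and the classifier below is a plain logistic regression, not AdvModSec itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Anomaly-scoring rule used by ModSecurity/CRS (weights and threshold here
# are illustrative, not the real CRS values).
RULE_WEIGHTS = np.array([5, 3, 3, 5, 2])   # one weight per CRS rule
ANOMALY_THRESHOLD = 5

def crs_decision(rule_activations):
    """Classic CRS logic: block if the summed weights of firing rules
    reach the anomaly threshold."""
    return rule_activations @ RULE_WEIGHTS >= ANOMALY_THRESHOLD

def train_rule_based_classifier(X_rule_activations, y_labels):
    """ML alternative sketched from the abstract: learn the decision from
    the same rule activations instead of using fixed weights."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_rule_activations, y_labels)
    return clf

# Toy data: rows are requests, columns are fired CRS rules.
X = np.array([[1, 0, 0, 0, 0],    # legitimate request firing one heavy rule
              [1, 1, 1, 0, 0],    # borderline request
              [1, 1, 1, 1, 1]])   # obvious SQLi firing every rule
y = np.array([0, 1, 1])

print(crs_decision(X))   # fixed weights block the legitimate request too
print(train_rule_based_classifier(X, y).predict(X))
```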


A Survey on Reinforcement Learning Security with Application to Autonomous Driving

Dec 12, 2022
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli

Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted either to prevent the reinforcement learning algorithm from learning an effective and reliable policy or to induce the trained agent to make wrong decisions. The literature on the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by adopting a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.


Explaining Machine Learning DGA Detectors from DNS Traffic Data

Aug 10, 2022
Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio

One of the most common causes of disruption of online systems is the Distributed Denial of Service (DDoS) attack, in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services under the commands of an attacker. These attacks are often carried out by leveraging the Domain Name System (DNS) through Domain Generation Algorithms (DGAs), a stealthy connection strategy that nonetheless leaves suspicious data patterns. To detect such threats, most recent approaches rely on Machine Learning (ML), which can be highly effective in analyzing and classifying massive amounts of data. Although strongly performing, ML models retain a certain degree of obscurity in their decision-making process. To cope with this problem, a branch of ML known as Explainable ML aims to break down the black-box nature of classifiers and make them interpretable and human-readable. This work addresses Explainable ML in the context of botnet and DGA detection and is, to the best of our knowledge, the first to concretely break down the decisions of ML classifiers devised for botnet/DGA detection, providing both global and local explanations.
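
As a toy illustration of local explanations for a DGA detector (not the models or the explanation method used in the paper), the sketch below trains a linear classifier on character bigrams of domain names and reports the per-bigram contributions to a single prediction; the domains and labels are made up.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy data: the actual work uses DNS traffic; these labels are illustrative.
domains = ["google.com", "wikipedia.org", "xjqzkplmvrtw.net", "aqwzsxedcrfv.biz"]
labels = [0, 0, 1, 1]                       # 0 = benign, 1 = DGA-generated

# Character bigrams capture the "randomness" typical of DGA domains.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 2))
X = vectorizer.fit_transform(domains)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

def explain(domain):
    """Local explanation for a linear model: each present bigram contributes
    weight * feature value to the decision."""
    x = vectorizer.transform([domain]).toarray()[0]
    contributions = clf.coef_[0] * x
    top = np.argsort(-np.abs(contributions))[:5]
    names = vectorizer.get_feature_names_out()
    return [(names[i], float(contributions[i])) for i in top if x[i] > 0]

print(explain("qzkxwplm.info"))
```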


Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware

Jul 12, 2022
Luca Demetrio, Battista Biggio, Fabio Roli

While machine learning is known to be vulnerable to adversarial examples, the field still lacks systematic procedures and tools for evaluating its security in different application contexts. In this article, we discuss how to develop automated and scalable security evaluations of machine learning using practical attacks, reporting a use case on Windows malware detection.

* IEEE Security & Privacy, 2022  

Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation

May 26, 2022
Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. These attacks have aimed solely at compromising the models' integrity (i.e., the trustworthiness of the model's predictions), while adversarial attacks targeting the models' availability, a critical aspect in safety-critical domains such as autonomous driving, have not yet been explored by the machine learning research community. In this paper, we propose NMS-Sponge, a novel approach that negatively affects the decision latency of YOLO, a state-of-the-art object detector, and compromises the model's availability by applying a universal adversarial perturbation (UAP). In our experiments, we demonstrate that the proposed UAP is able to increase the processing time of individual frames by adding "phantom" objects while preserving the detection of the original objects.
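
The reason "phantom" objects hurt availability is that the detector's post-processing, non-maximum suppression (NMS), gets slower as the number of candidate boxes grows. The sketch below is not the NMS-Sponge attack; it simply times torchvision's NMS on a few versus many random candidate boxes to show the effect such a perturbation exploits.

```python
import time
import torch
from torchvision.ops import nms

def random_boxes(n, image_size=640):
    """Random candidate boxes in (x1, y1, x2, y2) format."""
    xy = torch.rand(n, 2) * image_size
    wh = torch.rand(n, 2) * 50 + 1
    return torch.cat([xy, xy + wh], dim=1)

def nms_latency(num_boxes, iou_threshold=0.5, repeats=10):
    """Average wall-clock time of one NMS call over `repeats` runs."""
    boxes, scores = random_boxes(num_boxes), torch.rand(num_boxes)
    start = time.perf_counter()
    for _ in range(repeats):
        nms(boxes, scores, iou_threshold)
    return (time.perf_counter() - start) / repeats

# A "clean" frame with few detections vs. a frame flooded with phantom boxes.
for n in (100, 10_000):
    print(f"{n:6d} candidate boxes -> {nms_latency(n) * 1e3:.2f} ms per NMS call")
```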


ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

Mar 07, 2022
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli

Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding, and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches. It consists of a set of patches, optimized to generalize across different models, and readily applicable to ImageNet data after preprocessing them with affine transformations. This process enables an approximate yet faster robustness evaluation, leveraging the transferability of adversarial perturbations. We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models. We conclude by discussing how our dataset could be used as a benchmark for robustness, and how our methodology can be generalized to other domains. We open source our dataset and evaluation code at https://github.com/pralab/ImageNet-Patch.
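
A minimal sketch of the evaluation protocol described above, applying a precomputed patch to an image after an affine transformation and feeding the result to a pretrained classifier, is shown below. This is not the released code at the linked repository: the random patch, its placement, and the choice of ResNet-50 are assumptions made just for illustration.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet50, ResNet50_Weights

def apply_patch(image, patch, angle=0.0, translate=(0, 0), scale=1.0):
    """Paste a square patch onto the image after an affine transform.
    `image` and `patch` are CHW float tensors in [0, 1]."""
    c, h, w = image.shape
    canvas = torch.zeros_like(image)
    mask = torch.zeros(1, h, w)
    ph, pw = patch.shape[1:]
    top, left = (h - ph) // 2, (w - pw) // 2
    canvas[:, top:top + ph, left:left + pw] = patch
    mask[:, top:top + ph, left:left + pw] = 1.0
    # Move patch and mask together with the same affine transform.
    canvas = TF.affine(canvas, angle=angle, translate=list(translate),
                       scale=scale, shear=0.0)
    mask = TF.affine(mask, angle=angle, translate=list(translate),
                     scale=scale, shear=0.0)
    return image * (1 - mask) + canvas * mask

# Toy usage: a random 50x50 "patch" on a random image, then a pretrained model.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
image = torch.rand(3, 224, 224)
patched = apply_patch(image, torch.rand(3, 50, 50), angle=30, translate=(40, 20))
with torch.no_grad():
    pred = model(weights.transforms()(patched).unsqueeze(0)).argmax(1)
print(pred)
```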
