Ambra Demontis

Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization

Oct 12, 2023
Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks

Oct 12, 2023
Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

Sep 13, 2023
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

Jul 01, 2023
Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

A Survey on Reinforcement Learning Security with Application to Autonomous Driving

Dec 12, 2022
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

May 04, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

Machine Learning Security against Data Poisoning: Are We There Yet?

Apr 12, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Energy-Latency Attacks via Sponge Poisoning

Apr 11, 2022
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

Mar 07, 2022
Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli

Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference

Aug 31, 2021
Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
