Antonio Emanuele Cinà
σ-zero: Gradient-based Optimization of ℓ0-norm Adversarial Examples
Feb 02, 2024
Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, Marcello Pelillo

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
Sep 13, 2023
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
Jul 01, 2023
Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

On the Limitations of Model Stealing with Uncertainty Quantification Models
May 09, 2023
David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
May 04, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

Machine Learning Security against Data Poisoning: Are We There Yet?
Apr 12, 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Energy-Latency Attacks via Sponge Poisoning
Apr 11, 2022
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions
Jun 14, 2021
Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Mar 23, 2021
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

A Black-box Adversarial Attack for Poisoning Clustering
Sep 09, 2020
Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo