Alessandro Biondi

Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications

Nov 19, 2023
Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

Robust-by-Design Classification via Unitary-Gradient Neural Networks

Sep 09, 2022
Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

Jun 09, 2022
Federico Nesti, Giulio Rossolini, Gianluca D'Amico, Alessandro Biondi, Giorgio Buttazzo

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

Mar 14, 2022
Giulio Rossolini, Federico Nesti, Fabio Brau, Alessandro Biondi, Giorgio Buttazzo

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Jan 05, 2022
Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

Jan 04, 2022
Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

Aug 13, 2021
Federico Nesti, Giulio Rossolini, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo

Increasing the Confidence of Deep Neural Networks by Coverage Analysis

Jan 28, 2021
Giulio Rossolini, Alessandro Biondi, Giorgio Carlo Buttazzo

Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting

Jan 27, 2021
Federico Nesti, Alessandro Biondi, Giorgio Buttazzo
