Mahmood Sharif

Adversarial Robustness Through Artifact Design

Feb 07, 2024
Tsufit Shua, Mahmood Sharif

The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations

Dec 18, 2023
Zebin Yun, Achi-Or Weingarten, Eyal Ronen, Mahmood Sharif

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Jun 29, 2023
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Scalable Verification of GNN-based Job Schedulers

Mar 07, 2022
Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Dec 28, 2021
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection

Dec 19, 2019
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, Saurabh Shintre

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

Dec 19, 2019
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Jul 27, 2018
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Dec 31, 2017
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
