Lujo Bauer

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Jun 29, 2023
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators

Feb 27, 2023
Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini

Certified Robustness of Learning-based Static Malware Detectors

Jan 31, 2023
Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, Benjamin I. P. Rubinstein

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Dec 28, 2021
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection

Dec 19, 2019
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, Saurabh Shintre

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

Dec 19, 2019
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Jul 27, 2018
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Dec 31, 2017
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
