
Lujo Bauer

Sales Whisperer: A Human-Inconspicuous Attack on LLM Brand Recommendations

Jun 07, 2024

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Jun 29, 2023

Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators

Feb 27, 2023

Certified Robustness of Learning-based Static Malware Detectors

Jan 31, 2023

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Dec 28, 2021

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection

Dec 19, 2019

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

Dec 19, 2019

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Jul 27, 2018

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Dec 31, 2017