Michael K. Reiter

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

Feb 22, 2024
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong

Mendata: A Framework to Purify Manipulated Training Data

Dec 03, 2023
Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter

Group-based Robustness: A General Framework for Customized Robustness in the Real World

Jun 29, 2023
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Dec 28, 2021
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

Defense Through Diverse Directions

Mar 24, 2020
Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection

Dec 19, 2019
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, Saurabh Shintre

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

Dec 19, 2019
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

Jul 27, 2018
Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Dec 31, 2017
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

Stealing Machine Learning Models via Prediction APIs

Oct 03, 2016
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
