
Tudor Dumitraş

Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes

Nov 11, 2021
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş


A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

Oct 06, 2020
Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş


On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

Feb 27, 2020
Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot


How to 0wn NAS in Your Spare Time

Feb 17, 2020
Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş


Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks

Jun 03, 2019
Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş


Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks

Oct 08, 2018
Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitraş


When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

Mar 19, 2018
Octavian Suciu, Radu Mărginean, Yiğitcan Kaya, Hal Daumé III, Tudor Dumitraş


Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning

Jan 17, 2017
Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael Hicks, Tudor Dumitraş
