Thorsten Eisenhofer

On the Limitations of Model Stealing with Uncertainty Quantification Models

May 09, 2023
David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr

Model stealing aims at inferring a victim model's functionality at a fraction of the original training cost. While the goal is clear, in practice the model's architecture, weight dimension, and original training data cannot be determined exactly, leading to mutual uncertainty during stealing. In this work, we explicitly tackle this uncertainty by generating multiple possible networks and combining their predictions to improve the quality of the stolen model. For this, we compare five popular uncertainty quantification models in a model stealing task. Surprisingly, our results indicate that the considered models only lead to marginal improvements in terms of label agreement (i.e., fidelity) with the victim model. To find the cause of this, we inspect the diversity of the models' predictions by looking at the prediction variance as a function of training iterations. We observe that during training, the models tend towards similar predictions, indicating that the network diversity we wanted to leverage through uncertainty quantification is not high enough to improve the model stealing task.

* 6 pages, 1 figure, 2 tables; submitted to the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
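
For intuition, the following is a minimal sketch of the ensemble-based stealing setup described above, using a plain deep ensemble of scikit-learn MLPs as a stand-in for the five uncertainty quantification models studied in the paper; the toy victim, query set, and all hyperparameters are illustrative assumptions. It computes the two quantities the abstract refers to: fidelity (label agreement with the victim) and the variance of the ensemble members' predictions.

```python
# Minimal sketch of ensemble-based model stealing, assuming label-only query
# access to a victim classifier. A deep ensemble stands in for the paper's
# uncertainty quantification models.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy victim: in the real setting only its prediction API would be visible.
X, y = make_classification(n_samples=4000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_query, X_test, y_query, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                       random_state=0).fit(X_query, y_query)

# The attacker only sees the victim's predicted labels on the query set.
stolen_labels = victim.predict(X_query)

# Deep ensemble: several surrogates with different random initialisations.
ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=seed).fit(X_query, stolen_labels)
            for seed in range(5)]

# Combine member probabilities and compare against the victim on held-out data.
probs = np.stack([m.predict_proba(X_test) for m in ensemble])  # (members, n, classes)
ensemble_pred = probs.mean(axis=0).argmax(axis=1)
fidelity = (ensemble_pred == victim.predict(X_test)).mean()

# Diversity proxy: variance of member probabilities, averaged over points/classes.
prediction_variance = probs.var(axis=0).mean()
print(f"fidelity={fidelity:.3f}, mean prediction variance={prediction_variance:.4f}")
```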

No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning

Mar 25, 2023
Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck

The number of papers submitted to academic conferences is steadily rising in many scientific disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems use statistical topic models to characterize the content of submissions and automate the assignment to reviewers. In this paper, we show that this automation can be manipulated using adversarial learning. We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers. Our attack is based on a novel optimization strategy that alternates between the feature space and problem space to realize unobtrusive changes to the paper. To evaluate the feasibility of our attack, we simulate the paper-reviewer assignment of an actual security conference (IEEE S&P) with 165 reviewers on the program committee. Our results show that we can successfully select and remove reviewers without access to the assignment system. Moreover, we demonstrate that the manipulated papers remain plausible and are often indistinguishable from benign submissions.

* Accepted at USENIX Security Symposium 2023 
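
The alternating feature-space/problem-space idea can be illustrated with a toy assignment model. The sketch below assumes a much simpler system than the topic models attacked in the paper (TF-IDF plus cosine similarity) and greedily adds words to the submission whenever they improve the target reviewer's score; the reviewer profiles and the word-insertion strategy are purely hypothetical and ignore the unobtrusiveness constraints of the real attack.

```python
# Toy sketch: alternate between editing the paper (problem space) and checking
# the assignment model's scores (feature space) until the target reviewer wins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviewer_profiles = {
    "crypto_reviewer": "lattice encryption signature zero knowledge proof protocol",
    "ml_reviewer": "neural network adversarial training gradient robustness",
    "systems_reviewer": "kernel cache scheduler filesystem performance isolation",
}
submission = "we train a neural network with adversarial examples for robustness"
target = "crypto_reviewer"  # reviewer the attacker wants to be assigned

profile_texts = list(reviewer_profiles.values())
names = list(reviewer_profiles)
vectorizer = TfidfVectorizer().fit(profile_texts + [submission])
profiles = vectorizer.transform(profile_texts)

def best_reviewer(text):
    sims = cosine_similarity(vectorizer.transform([text]), profiles)[0]
    return names[int(np.argmax(sims))]

candidate_words = reviewer_profiles[target].split()
for _ in range(20):                       # problem-space loop: edit the text
    if best_reviewer(submission) == target:
        break
    # Feature-space step: pick the word that most increases the target's score.
    gains = [(cosine_similarity(vectorizer.transform([submission + " " + w]),
                                profiles)[0][names.index(target)], w)
             for w in candidate_words]
    submission += " " + max(gains)[1]

print(best_reviewer(submission), "->", submission)
```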

Learned Systems Security

Jan 10, 2023
Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot

A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de facto risks that the incorporation of ML carries. We analyze the root causes of the potentially increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
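
The key-distribution snooping exploit can be illustrated with a toy learned index: a linear model over sorted keys whose prediction error, and hence lookup cost, grows where a co-tenant's keys cluster. The sketch below uses the number of local search steps as a stand-in for measured latency; the index, key distributions, and attack are deliberately simplistic assumptions, not the systems evaluated in the paper.

```python
# Toy sketch: timing one's own probe queries against a shared "learned index"
# leaks where the other user's keys concentrate.
import bisect
import numpy as np

rng = np.random.default_rng(0)
attacker_keys = rng.uniform(0, 1_000_000, 1_000)
victim_keys = rng.normal(700_000, 5_000, 20_000)   # secret: clustered keys
keys = np.sort(np.concatenate([attacker_keys, victim_keys]))

# "Learned" index: predict position from key with a straight line.
slope, intercept = np.polyfit(keys, np.arange(len(keys)), 1)

def lookup_cost(key):
    """Steps of local search needed to correct the model's prediction."""
    predicted = int(np.clip(slope * key + intercept, 0, len(keys) - 1))
    true_pos = bisect.bisect_left(keys, key)
    return abs(true_pos - predicted)               # proxy for measured latency

# The attacker probes its *own* key range and observes where lookups get slow.
probes = np.linspace(0, 1_000_000, 50)
costs = [lookup_cost(p) for p in probes]
print("slowest probe region near key =", probes[int(np.argmax(costs))])
```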


Verifiable and Provably Secure Machine Unlearning

Oct 17, 2022
Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot

Machine unlearning aims to remove points from the training dataset of a machine learning model after training, for example when a user requests their data to be deleted. While many machine unlearning methods have been proposed, none of them enable users to audit the unlearning procedure and verify that their data was indeed unlearned. To address this, we define the first cryptographic framework to formally capture the security of verifiable machine unlearning. While our framework is generally applicable to different approaches, its advantages are perhaps best illustrated by our instantiation for the canonical approach to unlearning: retraining the model without the data to be unlearned. In our cryptographic protocol, the server first computes a proof that the model was trained on a dataset $D$. Given a user data point $d$, the server then computes a proof of unlearning that shows that $d \notin D$. We realize our protocol using a SNARK and Merkle trees to obtain proofs of update and unlearning on the data. Based on cryptographic assumptions, we then present a formal game-based proof that our instantiation is secure. Finally, we validate the practicality of our constructions for unlearning in linear regression, logistic regression, and neural networks.
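
A minimal sketch of the Merkle-tree commitment underlying the instantiation is given below, assuming SHA-256 leaves. It only shows how a server can commit to a dataset and how a user can verify an inclusion proof against the published root; the SNARK that proves training on $D$ and the actual non-membership argument for the unlearned point are omitted.

```python
# Minimal Merkle-tree commitment sketch (SHA-256 leaves).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with position flags) from leaf `index` up to the root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am I right child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

dataset = [f"datapoint-{i}".encode() for i in range(8)]
root_before = merkle_root(dataset)
proof = merkle_proof(dataset, 3)
assert verify(dataset[3], proof, root_before)   # the user's point was in D

unlearned = dataset[:3] + dataset[4:]           # server retrains without the point
root_after = merkle_root(unlearned)
print("root changed after unlearning:", root_before != root_after)
```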


Dompteur: Taming Audio Adversarial Examples

Feb 10, 2021
Thorsten Eisenhofer, Lea Schönherr, Joel Frank, Lars Speckemeier, Dorothea Kolossa, Thorsten Holz

Adversarial examples seem to be inevitable. These specifically crafted inputs allow attackers to arbitrarily manipulate machine learning systems. Even worse, they often seem harmless to human observers. In our digital society, this poses a significant threat. For example, Automatic Speech Recognition (ASR) systems, which serve as hands-free interfaces to many kinds of systems, can be attacked with inputs incomprehensible to human listeners. The research community has unsuccessfully tried several approaches to tackle this problem. In this paper, we propose a different perspective: We accept the presence of adversarial examples against ASR systems, but we require them to be perceivable by human listeners. By applying the principles of psychoacoustics, we can remove semantically irrelevant information from the ASR input and train a model that resembles human perception more closely. We implement our idea in a tool named Dompteur and demonstrate that our augmented system, in contrast to an unmodified baseline, successfully focuses on perceptible ranges of the input signal. This change forces adversarial examples into the audible range, while using minimal computational overhead and preserving benign performance. To evaluate our approach, we construct an adaptive attacker that actively tries to avoid our augmentations, and we demonstrate that adversarial examples from this attacker remain clearly perceivable. Finally, we substantiate our claims by performing a hearing test with crowd-sourced human listeners.
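
As a rough illustration of the preprocessing idea, the sketch below band-passes the signal to a perceptually relevant range and zeroes out spectral bins far below each frame's peak, as a crude proxy for a psychoacoustic masking threshold. The cutoff frequencies and the 40 dB margin are illustrative assumptions; Dompteur derives its thresholds from a proper psychoacoustic model.

```python
# Crude stand-in for psychoacoustics-inspired ASR input filtering.
import numpy as np
from scipy.signal import butter, sosfilt, stft, istft

def preprocess(audio, sample_rate=16_000, low_hz=200, high_hz=7_000, margin_db=40):
    # Band-pass: discard frequencies that carry little speech information.
    sos = butter(10, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    audio = sosfilt(sos, audio)

    # Rough masking: drop bins more than `margin_db` below each frame's peak.
    _, _, spec = stft(audio, fs=sample_rate, nperseg=512)
    magnitude_db = 20 * np.log10(np.abs(spec) + 1e-12)
    mask = magnitude_db >= magnitude_db.max(axis=0, keepdims=True) - margin_db
    _, cleaned = istft(spec * mask, fs=sample_rate, nperseg=512)
    return cleaned

# Example: one second of a 440 Hz tone plus a low-frequency component.
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
print(preprocess(noisy, sr).shape)
```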


VENOMAVE: Clean-Label Poisoning Against Speech Recognition

Oct 21, 2020
Hojjat Aghakhani, Thorsten Eisenhofer, Lea Schönherr, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna

In the past few years, we have observed a wide adoption of practical systems that use Automatic Speech Recognition (ASR) to improve human-machine interaction. Modern ASR systems are based on neural networks, and prior research demonstrated that these systems are susceptible to adversarial examples, i.e., malicious audio inputs that lead to misclassification by the victim's network at run time. The research question of whether ASR systems are also vulnerable to data poisoning attacks is still unanswered. In such an attack, the manipulation happens during the training phase of the neural network: an adversary injects malicious inputs into the training set such that the neural network's integrity and performance are compromised. In this paper, we present the first data poisoning attack in the audio domain, called VENOMAVE. Prior work in the image domain demonstrated several types of data poisoning attacks, but they cannot be applied to the audio domain. The main challenge is that we need to attack a time series of inputs. To enforce a targeted misclassification in an ASR system, we need to carefully generate a specific sequence of disturbed inputs for the target utterance, which will eventually be decoded to the desired sequence of words. More specifically, the adversarial goal is to produce a series of misclassification tasks, and in each of them we need to poison the system to misrecognize each frame of the target file. To demonstrate the practical feasibility of our attack, we evaluate VENOMAVE on an ASR system that detects sequences of digits from 0 to 9. When poisoning only 0.94% of the dataset on average, we achieve an attack success rate of 83.33%. We conclude that data poisoning attacks against ASR systems represent a real threat that needs to be considered.
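
To make the frame-level poisoning objective concrete, the sketch below uses a heavily simplified, feature-collision-style surrogate: base frames keep their clean labels but are nudged so that their hidden features collide with the target frames of the utterance. The toy encoder, dimensions, and loss weights are hypothetical, and VENOMAVE's actual optimization over a full ASR pipeline is substantially more involved.

```python
# Simplified clean-label poison crafting on a toy frame encoder (not VENOMAVE's
# real objective): push poison frames toward the target frames in feature space
# while keeping them close to their clean base frames.
import torch
import torch.nn as nn

torch.manual_seed(0)
feature_dim = 40                                   # toy acoustic feature size

encoder = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU())  # surrogate model

target_frames = torch.randn(5, feature_dim)       # frames of the target utterance
base_frames = torch.randn(5, feature_dim)         # clean frames with desired labels
poison = base_frames.clone().requires_grad_(True)

optimizer = torch.optim.Adam([poison], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    # Collide in feature space while staying close to the clean base frames.
    loss = ((encoder(poison) - encoder(target_frames)) ** 2).mean() \
         + 0.1 * ((poison - base_frames) ** 2).mean()
    loss.backward()
    optimizer.step()

perturbation = (poison - base_frames).abs().mean().item()
print(f"final loss {loss.item():.4f}, mean |perturbation| {perturbation:.4f}")
```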


Leveraging Frequency Analysis for Deep Fake Image Recognition

Mar 20, 2020
Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz

Deep neural networks can generate images that are astonishingly realistic, so much so that it is often hard for humans to distinguish them from actual photos. These achievements have been largely made possible by Generative Adversarial Networks (GANs). While these deep fake images have been thoroughly investigated in the image domain (a classical approach from the area of image forensics), an analysis in the frequency domain has been missing so far. In this paper, we address this shortcoming and our results reveal that in frequency space, GAN-generated images exhibit severe artifacts that can be easily identified. We perform a comprehensive analysis, showing that these artifacts are consistent across different neural network architectures, data sets, and resolutions. In a further investigation, we demonstrate that these artifacts are caused by upsampling operations found in all current GAN architectures, indicating a structural and fundamental problem in the way images are generated via GANs. Based on this analysis, we demonstrate how the frequency representation can be used to identify deep fake images in an automated way, surpassing state-of-the-art methods.

* For accompanying code see https://github.com/RUB-SysSec/GANDCTAnalysis. 13 pages, 7 figures 
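
A minimal sketch of the detection idea follows: compute the 2D DCT of each grayscale image, use the log-magnitudes as a frequency-domain feature vector, and fit a linear classifier. The random arrays below merely stand in for real photos and GAN samples; the paper's released code (linked above) implements the full pipeline.

```python
# Frequency-domain deep fake detection sketch: log-DCT features + linear classifier.
import numpy as np
from scipy.fft import dctn
from sklearn.linear_model import LogisticRegression

def dct2_logspectrum(image):
    """Log-scaled 2D DCT magnitudes of a single grayscale image."""
    coeffs = dctn(image, norm="ortho")
    return np.log(np.abs(coeffs) + 1e-12).ravel()

rng = np.random.default_rng(0)
# Placeholders: load real photos and GAN-generated samples here instead.
real_images = rng.normal(size=(200, 64, 64))
fake_images = rng.normal(size=(200, 64, 64))
fake_images[:, 32::2, 32::2] += 0.5              # crude stand-in for upsampling artifacts

X = np.array([dct2_logspectrum(img)
              for img in np.concatenate([real_images, fake_images])])
y = np.array([0] * len(real_images) + [1] * len(fake_images))

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```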