Nils Lukas

Universal Backdoor Attacks
Nov 30, 2023
Benjamin Schneider, Nils Lukas, Florian Kerschbaum

Leveraging Optimization for Adaptive Attacks on Image Watermarks
Sep 29, 2023
Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, Florian Kerschbaum

Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions
Jun 14, 2023
Abdulrahman Diaa, Lucas Fenaux, Thomas Humphries, Marian Dietz, Faezeh Ebrahimianghazani, Bailey Kacsmar, Xinda Li, Nils Lukas, Rasoul Akhavan Mahdavi, Simon Oya, Ehsan Amjadian, Florian Kerschbaum

Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks against Deep Image Classification
May 07, 2023
Nils Lukas, Florian Kerschbaum

PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
Apr 14, 2023
Nils Lukas, Florian Kerschbaum

Analyzing Leakage of Personally Identifiable Information in Language Models
Feb 01, 2023
Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version)
Aug 11, 2021
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Dec 02, 2019
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum

On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks
Jun 18, 2019
Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Florian Kerschbaum
