Kathrin Grosse

I Stolenly Swear That I Am Up to (No) Good: Design and Evaluation of Model Stealing Attacks

Aug 29, 2025

Design Patterns for Securing LLM Agents against Prompt Injections

Jun 11, 2025

Manipulating Trajectory Prediction with Backdoors

Jan 03, 2024

Towards more Practical Threat Models in Artificial Intelligence Security

Nov 16, 2023

A Survey on Reinforcement Learning Security with Application to Autonomous Driving

Dec 12, 2022

"Why do so?" -- A Practical Perspective on Machine Learning Security

Jul 11, 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

May 04, 2022

Machine Learning Security against Data Poisoning: Are We There Yet?

Apr 12, 2022

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

Jun 14, 2021

Mental Models of Adversarial Machine Learning

May 08, 2021