Maksym Andriushchenko

Saarland University

Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs

Mar 25, 2026

PostTrainBench: Can LLM Agents Automate LLM Post-Training?

Mar 10, 2026

Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks

Feb 25, 2026

Helpful to a Fault: Measuring Illicit Assistance in Multi-Turn, Multilingual LLM Agents

Feb 19, 2026

HalluHard: A Hard Multi-Turn Hallucination Benchmark

Feb 01, 2026

Agent Skills Enable a New Class of Realistic and Trivially Simple Prompt Injections

Oct 30, 2025

Adaptive Attacks on Trusted Monitors Subvert AI Control Protocols

Oct 10, 2025

OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents

Jun 17, 2025

Monitoring Decomposition Attacks in LLMs with Lightweight Sequential Monitors

Jun 12, 2025

Capability-Based Scaling Laws for LLM Red-Teaming

May 26, 2025