Asja Fischer

DepthKV: Layer-Dependent KV Cache Pruning for Long-Context LLM Inference

Apr 27, 2026

Revisiting Neural Activation Coverage for Uncertainty Estimation

Apr 24, 2026

On the Robustness of Watermarking for Autoregressive Image Generation

Apr 13, 2026

Precision-Varying Prediction (PVP): Robustifying ASR systems against adversarial attacks

Mar 23, 2026

SAMSEM -- A Generic and Scalable Approach for IC Metal Line Segmentation

Mar 17, 2026

Towards an Optimal Control Perspective of ResNet Training

Jun 26, 2025

RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors

Jun 09, 2025

Security Benefits and Side Effects of Labeling AI-Generated Images

May 28, 2025

Towards A Correct Usage of Cryptography in Semantic Watermarks for Diffusion Models

Mar 14, 2025

Can LLMs Explain Themselves Counterfactually?

Feb 25, 2025