Tobias Ladner

Formally Verifying Analog Neural Networks Under Process Variations Using Polynomial Zonotopes

May 11, 2026

Set-Based Training of Neural Barrier Certificates for Safety Verification of Dynamical Systems

May 04, 2026

Provably Explaining Neural Additive Models

Feb 19, 2026

Perception with Guarantees: Certified Pose Estimation via Reachability Analysis

Feb 10, 2026

The 6th International Verification of Neural Networks Competition (VNN-COMP 2025): Summary and Results

Dec 22, 2025

Abstraction-Based Proof Production in Formal Verification of Neural Networks

Jun 11, 2025

Explaining, Fast and Slow: Abstraction and Refinement of Provable Explanations

Jun 10, 2025

Out of the Shadows: Exploring a Latent Space for Neural Network Verification

May 23, 2025

Language Models That Walk the Talk: A Framework for Formal Fairness Certificates

May 19, 2025

Training Verifiably Robust Agents Using Set-Based Reinforcement Learning

Aug 17, 2024