Nathan Drenkow

A Systematic Review of Poisoning Attacks Against Large Language Models

Jun 06, 2025

Backdoors in DRL: Four Environments Focusing on In-distribution Triggers

May 22, 2025

Investigating the Treacherous Turn in Deep Reinforcement Learning

Apr 11, 2025

Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework

Mar 13, 2025

A Causal Framework for Aligning Image Quality Metrics and Deep Neural Network Robustness

Mar 04, 2025

Causality-Driven Audits of Model Robustness

Oct 30, 2024

From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments

Feb 28, 2024

RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in Object-centric Learning

Aug 28, 2023

Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence

May 26, 2023

Data AUDIT: Identifying Attribute Utility- and Detectability-Induced Bias in Task Models

Apr 06, 2023