Owain Evans

Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers

Dec 17, 2025

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

Dec 10, 2025

School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs

Aug 24, 2025

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety

Jul 15, 2025

Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models

Jun 16, 2025

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

Feb 25, 2025

Tell me about yourself: LLMs are aware of their learned behaviors

Jan 19, 2025

Inference-Time-Compute: More Faithful? A Research Note

Jan 14, 2025

The Two-Hop Curse: LLMs trained on A->B, B->C fail to learn A->C

Nov 25, 2024

Towards evaluations-based safety cases for AI scheming

Nov 07, 2024