Owain Evans

School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs

Aug 24, 2025

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety

Jul 15, 2025

Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models

Jun 16, 2025

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

Feb 25, 2025

Tell me about yourself: LLMs are aware of their learned behaviors

Jan 19, 2025

Inference-Time-Compute: More Faithful? A Research Note

Jan 14, 2025

The Two-Hop Curse: LLMs trained on A->B, B->C fail to learn A->C

Nov 25, 2024

Towards evaluations-based safety cases for AI scheming

Nov 07, 2024

Looking Inward: Language Models Can Learn About Themselves by Introspection

Oct 17, 2024

Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs

Jul 05, 2024