
Geoffrey Irving

How to evaluate control measures for LLM agents? A trajectory from today to superintelligence
Apr 07, 2025

A sketch of an AI control safety case
Jan 28, 2025

Gemini: A Family of Highly Capable Multimodal Models
Dec 19, 2023

Scalable AI Safety via Doubly-Efficient Debate
Nov 23, 2023

Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
Jul 24, 2023

Accelerating Large Language Model Decoding with Speculative Sampling
Feb 02, 2023

Solving math word problems with process- and outcome-based feedback
Nov 25, 2022

Fine-Tuning Language Models via Epistemic Neural Networks
Nov 03, 2022

Improving alignment of dialogue agents via targeted human judgements
Sep 28, 2022

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Jun 16, 2022