Yee Whye Teh

University College London

Manifold Aware Denoising Score Matching (MAD)
Mar 02, 2026

SkillCraft: Can LLM Agents Learn to Use Tools Skillfully?
Feb 28, 2026

Are We Evaluating the Edit Locality of LLM Model Editing Properly?
Jan 24, 2026

Meta Flow Maps enable scalable reward alignment
Jan 20, 2026

SigmaDock: Untwisting Molecular Docking With Fragment-Based SE(3) Diffusion
Nov 06, 2025

Enhancing Large Language Model Reasoning with Reward Models: An Analytical Survey
Oct 02, 2025

Is Model Editing Built on Sand? Revealing Its Illusory Success and Fragile Foundation
Oct 01, 2025

Rao-Blackwellised Reparameterisation Gradients
Jun 09, 2025

Extending Epistemic Uncertainty Beyond Parameters Would Assist in Designing Reliable LLMs
Jun 09, 2025

NoProp: Training Neural Networks without Back-propagation or Forward-propagation
Mar 31, 2025