Timothy Baldwin

Faithfulness-Aware Uncertainty Quantification for Fact-Checking the Output of Retrieval Augmented Generation

May 28, 2025

Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs

May 26, 2025

A Head to Predict and a Head to Question: Pre-trained Uncertainty Quantification Heads for Hallucination Detection in LLM Outputs

May 13, 2025

Analysis of Emotion in Rumour Threads on Social Media

Feb 23, 2025

Control Illusion: The Failure of Instruction Hierarchies in Large Language Models

Feb 21, 2025

Token-Level Density-Based Uncertainty Quantification Methods for Eliciting Truthfulness of Large Language Models

Feb 20, 2025

SCALAR: Scientific Citation-based Live Assessment of Long-context Academic Reasoning

Feb 19, 2025

Qorgau: Evaluating LLM Safety in Kazakh-Russian Bilingual Contexts

Feb 19, 2025

RuozhiBench: Evaluating LLMs with Logical Fallacies and Misleading Premises

Feb 18, 2025

Balanced Multi-Factor In-Context Learning for Multilingual Large Language Models

Feb 17, 2025