
Nigel Collier

Department of Language Engineering, UMIST, UK

Confident Rankings with Fewer Items: Adaptive LLM Evaluation with Continuous Scores

Jan 20, 2026

Failure Modes in Multi-Hop QA: The Weakest Link Law and the Recognition Bottleneck

Jan 18, 2026

Steer Model beyond Assistant: Controlling System Prompt Strength via Contrastive Decoding

Jan 10, 2026

Value of Information: A Framework for Human-Agent Communication

Jan 10, 2026

Confidence Estimation for LLMs in Multi-turn Interactions

Jan 05, 2026

All Roads Lead to Rome: Graph-Based Confidence Estimation for Large Language Model Reasoning

Sep 16, 2025

A Survey on Prompt Tuning

Jul 09, 2025

Reinforcement Learning for Better Verbalized Confidence in Long-Form Generation

May 29, 2025

UNCLE: Uncertainty Expressions in Long-Form Generation

May 22, 2025

PT-MoE: An Efficient Finetuning Framework for Integrating Mixture-of-Experts into Prompt Tuning

May 14, 2025