
Elizabeth Daly

FactCorrector: A Graph-Inspired Approach to Long-Form Factuality Correction of Large Language Models

Jan 16, 2026

Shapelets-Enriched Selective Forecasting using Time Series Foundation Models

Jan 16, 2026

Localizing Persona Representations in LLMs

May 30, 2025

Humble AI in the real-world: the case of algorithmic hiring

May 27, 2025

FactReasoner: A Probabilistic Approach to Long-Form Factuality Assessment for Large Language Models

Feb 25, 2025

BenchmarkCards: Large Language Model and Risk Reporting

Oct 16, 2024

WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia

Jun 19, 2024

Ranking Large Language Models without Ground Truth

Feb 21, 2024

Explaining Knock-on Effects of Bias Mitigation

Dec 01, 2023

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

Aug 30, 2023