Natalie Mackraz

Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs

May 29, 2025

Aligning LLMs by Predicting Preferences from User Writing Samples

May 27, 2025

Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

Dec 04, 2024

PREDICT: Preference Reasoning by Evaluating Decomposed preferences Inferred from Candidate Trajectories

Oct 08, 2024

Sample-Efficient Preference-based Reinforcement Learning with Dynamics Aware Rewards

Feb 28, 2024

Large Language Models as Generalizable Policies for Embodied Tasks

Oct 26, 2023