
Aaron J. Li

More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness

Apr 29, 2024