RLHF Fine-Tuning of LLMs for Alignment with Implicit User Feedback in Conversational Recommenders

Aug 07, 2025

View paper on arXiv