Preference Optimization via Contrastive Divergence: Your Reward Model is Secretly an NLL Estimator

Feb 06, 2025
[Figures 1–4 from the paper]

View paper on arXiv