DreamDPO: Aligning Text-to-3D Generation with Human Preferences via Direct Preference Optimization

Feb 05, 2025
