Diffusion Models


Diffusion models are a class of generative models that learn a data distribution by reversing a gradual noising process: starting from a simple base distribution (typically Gaussian noise), they apply a learned sequence of denoising transformations to produce samples. They have been used in various applications, including image generation, text generation, and density estimation.
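
To make the sampling direction concrete, below is a minimal sketch of a DDPM-style reverse loop (Ho et al., 2020), not a definitive implementation. The network `eps_model` is a hypothetical trained noise predictor assumed here for illustration; the beta schedule is the standard linear schedule.

```python
import torch

# Minimal DDPM-style reverse (denoising) loop. `eps_model` is a
# hypothetical trained network that predicts the noise added at step t.

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative products

@torch.no_grad()
def sample(eps_model, shape):
    x = torch.randn(shape)                  # start from the base distribution
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps = eps_model(x, torch.full((shape[0],), t))  # predicted noise
        # One reverse step: subtract the predicted noise contribution,
        # then re-inject a small amount of fresh noise (except at t = 0).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t]) + torch.sqrt(betas[t]) * z
    return x
```

Each iteration moves the sample one step from pure noise toward the data distribution; the injected noise term keeps the reverse process stochastic, matching the learned reverse transition.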

Mind the Generative Details: Direct Localized Detail Preference Optimization for Video Diffusion Models

Jan 08, 2026

Measurement-Consistent Langevin Corrector: A Remedy for Latent Diffusion Inverse Solvers

Jan 08, 2026

Spatial-Temporal Feedback Diffusion Guidance for Controlled Traffic Imputation

Jan 08, 2026

PyramidalWan: On Making Pretrained Video Model Pyramidal for Efficient Inference

Jan 08, 2026

VerseCrafter: Dynamic Realistic Video World Model with 4D Geometric Control

Jan 08, 2026

FlowLet: Conditional 3D Brain MRI Synthesis using Wavelet Flow Matching

Jan 08, 2026

Agentic Retoucher for Text-To-Image Generation

Jan 08, 2026

Towards Spatio-Temporal Extrapolation of Phase-Field Simulations with Convolution-Only Neural Networks

Jan 08, 2026

Inference Attacks Against Graph Generative Diffusion Models

Jan 07, 2026

Beyond Binary Preference: Aligning Diffusion Models to Fine-grained Criteria by Decoupling Attributes

Jan 07, 2026