
Wei-Fang Sun

MDM-Prime-v2: Binary Encoding and Index Shuffling Enable Compute-optimal Scaling of Diffusion Language Models

Mar 17, 2026

LOBE-GS: Load-Balanced and Efficient 3D Gaussian Splatting for Large-Scale Scene Reconstruction

Oct 02, 2025

Beyond Masked and Unmasked: Discrete Diffusion Models via Partial Masking

May 24, 2025

Retraining-Free Merging of Sparse Mixture-of-Experts via Hierarchical Clustering

Oct 11, 2024

Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow

May 22, 2024

DriveEnv-NeRF: Exploration of A NeRF-Based Autonomous Driving Environment for Real-World Performance Validation

Mar 23, 2024

Expert Proximity as Surrogate Rewards for Single Demonstration Imitation Learning

Feb 01, 2024

A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning

Jun 04, 2023

Training Energy-Based Normalizing Flow with Score-Matching Objectives

May 24, 2023

Quasi-Conservative Score-based Generative Models

Sep 26, 2022