
Weikang Bian

MMEmb-R1: Reasoning-Enhanced Multimodal Embedding with Pair-Aware Selection and Adaptive Control

Apr 07, 2026

ReinDriveGen: Reinforcement Post-Training for Out-of-Distribution Driving Scene Generation

Apr 01, 2026

RelightMaster: Precise Video Relighting with Multi-plane Light Images

Nov 09, 2025

GS-DiT: Advancing Video Generation with Pseudo 4D Gaussian Fields through Efficient Dense 3D Point Tracking

Jan 05, 2025

A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding

Nov 04, 2024

BlinkVision: A Benchmark for Optical Flow, Scene Flow and Point Tracking Estimation using RGB Frames and Events

Oct 27, 2024

Phased Consistency Model

May 28, 2024

AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning

Feb 01, 2024

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

Jan 31, 2024

Context-TAP: Tracking Any Point Demands Spatial Context Features

Jun 03, 2023