
Zedong Wang

CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing

Mar 09, 2026

Steady-State Behavior of Constant-Stepsize Stochastic Approximation: Gaussian Approximation and Tail Bounds

Feb 15, 2026

Quantifying Normality: Convergence Rate to Gaussian Limit for Stochastic Approximation and Unadjusted OU Algorithm

Feb 14, 2026

Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning

Jul 28, 2025

MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization

Apr 01, 2025

Prior-guided Hierarchical Harmonization Network for Efficient Image Dehazing

Mar 03, 2025

Unveiling the Backbone-Optimizer Coupling Bias in Visual Representation Learning

Oct 08, 2024

A Survey on Mixup Augmentations and Beyond

Sep 08, 2024

Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences

Jun 12, 2024

VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling

May 13, 2024