
Kai Zhang


Foundation Models in Biomedical Imaging: Turning Hype into Reality

Dec 17, 2025

L-STEC: Learned Video Compression with Long-term Spatio-Temporal Enhanced Context

Dec 14, 2025

E-RayZer: Self-supervised 3D Reconstruction as Spatial Visual Pre-training

Dec 11, 2025

From Noise to Latent: Generating Gaussian Latents for INR-Based Image Compression

Nov 11, 2025

Scaling Agent Learning via Experience Synthesis

Nov 10, 2025

A Survey on Deep Text Hashing: Efficient Semantic Text Retrieval with Binary Representation

Oct 31, 2025

1+1>2: A Synergistic Sparse and Low-Rank Compression Method for Large Language Models

Oct 30, 2025

pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation

Oct 16, 2025

AdaSwitch: Adaptive Switching Generation for Knowledge Distillation

Oct 09, 2025

NAMOUnc: Navigation Among Movable Obstacles with Decision Making on Uncertainty Interval

Sep 16, 2025