Wensheng Zhang

GradMAP: Faster Layer Pruning with Gradient Metric and Projection Compensation

Feb 16, 2026

FedProtoKD: Dual Knowledge Distillation with Adaptive Class-wise Prototype Margin for Heterogeneous Federated Learning

Aug 27, 2025

Reasoning Multimodal Large Language Model: Data Contamination and Dynamic Evaluation

Jun 08, 2025

Is your multimodal large language model a good science tutor?

May 09, 2025

Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving

May 09, 2025

Is Your Video Language Model a Reliable Judge?

Mar 07, 2025

On Fairness of Unified Multimodal Large Language Model for Image Generation

Feb 05, 2025

AdaptGCD: Multi-Expert Adapter Tuning for Generalized Category Discovery

Oct 29, 2024

Gradient Projection For Parameter-Efficient Continual Learning

May 22, 2024

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Apr 15, 2024