Zhiqiang Shen

Sink-Aware Pruning for Diffusion Language Models

Feb 19, 2026

Pushing the Frontier of Black-Box LVLM Attacks via Fine-Grained Detail Targeting

Feb 19, 2026

Fast and Scalable Analytical Diffusion

Feb 18, 2026

Next-Gen CAPTCHAs: Leveraging the Cognitive Gap for Scalable and Diverse GUI-Agent Defense

Feb 09, 2026

ShapeCond: Fast Shapelet-Guided Dataset Condensation for Time Series Classification

Feb 09, 2026

Hard Labels In! Rethinking the Role of Hard Labels in Mitigating Local Semantic Drift

Dec 22, 2025

Do Not Merge My Model! Safeguarding Open-Source LLMs Against Unauthorized Model Merging

Nov 13, 2025

RAGFort: Dual-Path Defense Against Proprietary Knowledge Base Extraction in Retrieval-Augmented Generation

Nov 13, 2025

Attention Is All You Need for KV Cache in Diffusion LLMs

Oct 16, 2025

Prompting Test-Time Scaling Is A Strong LLM Reasoning Data Augmentation

Oct 10, 2025