
Zhenheng Tang

GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging

Aug 26, 2025

AnTKV: Anchor Token-Aware Sub-Bit Vector Quantization for KV Cache in Large Language Models

Jun 24, 2025

Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression

May 26, 2025

Assessing Judging Bias in Large Reasoning Models: An Empirical Study

Apr 14, 2025

The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?

Feb 24, 2025

One-shot Federated Learning Methods: A Practical Guide

Feb 13, 2025

Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing

Feb 06, 2025

Can LLMs Maintain Fundamental Abilities under KV Cache Compression?

Feb 04, 2025

FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models

Jan 18, 2025

What Limits LLM-based Human Simulation: LLMs or Our Design?

Jan 15, 2025