Jianchun Liu

Beyond Physical Labels: Redefining Domains for Robust WiFi-based Gesture Recognition

Jan 08, 2026

SABlock: Semantic-Aware KV Cache Eviction with Adaptive Compression Block Size

Oct 26, 2025

Accelerating Mixture-of-Expert Inference with Adaptive Expert Split Mechanism

Sep 10, 2025

Mitigating Catastrophic Forgetting with Adaptive Transformer Block Expansion in Federated Fine-Tuning

Jun 06, 2025

Efficient Federated Fine-Tuning of Large Language Models with Layer Dropout

Mar 13, 2025

Collaborative Speculative Inference for Efficient LLM Inference Serving

Mar 13, 2025

A Robust Federated Learning Framework for Undependable Devices at Scale

Dec 28, 2024

Caesar: A Low-deviation Compression Approach for Efficient Federated Learning

Dec 28, 2024

Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices

Dec 28, 2024

Enhancing Federated Graph Learning via Adaptive Fusion of Structural and Node Characteristics

Dec 25, 2024