
Haibing Guan

Shanghai Jiao Tong University

FedMomentum: Preserving LoRA Training Momentum in Federated Fine-Tuning

Mar 09, 2026

SettleFL: Trustless and Scalable Reward Settlement Protocol for Federated Learning on Permissionless Blockchains (Extended version)

Feb 26, 2026

pFedNavi: Structure-Aware Personalized Federated Vision-Language Navigation for Embodied AI

Feb 16, 2026

HyperOffload: Graph-Driven Hierarchical Memory Management for Large Language Models on SuperNode Architectures

Feb 03, 2026

SpecQuant: Spectral Decomposition and Adaptive Truncation for Ultra-Low-Bit LLMs Quantization

Nov 11, 2025

QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations

Nov 10, 2025

POLAR: Policy-based Layerwise Reinforcement Learning Method for Stealthy Backdoor Attacks in Federated Learning

Oct 21, 2025

Dissecting the Impact of Mobile DVFS Governors on LLM Inference Performance and Energy Efficiency

Jul 02, 2025

DASH: Input-Aware Dynamic Layer Skipping for Efficient LLM Inference with Markov Decision Policies

May 23, 2025

Samoyeds: Accelerating MoE Models with Structured Sparsity Leveraging Sparse Tensor Cores

Mar 13, 2025