Network Pruning


Network pruning is a popular approach for reducing a heavy, over-parameterized network to a lightweight one by removing its redundant parameters. A complex network is first trained, then pruned according to some criterion (e.g., weight magnitude), and finally fine-tuned to recover performance comparable to the original with far fewer parameters.
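The train-prune-fine-tune pipeline above can be sketched with the simplest common criterion, magnitude pruning: zero out the smallest-magnitude weights of a trained layer and keep a binary mask so the pruned entries stay at zero during fine-tuning. This is a minimal NumPy illustration, not any specific paper's method; the function name and sparsity level are illustrative.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a trained weight matrix.

    weights:  weight array of a trained layer
    sparsity: fraction of weights to remove (0.0 .. 1.0)
    Returns (pruned_weights, mask); the mask is reapplied after each
    fine-tuning step so pruned weights stay at zero.
    """
    k = int(sparsity * weights.size)              # number of weights to drop
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

# Example: prune 50% of a small "trained" layer
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, 0.5)
print(int(mask.sum()))  # 8 of 16 weights survive
```

During fine-tuning, gradients are computed as usual and the mask is multiplied back into the weights (or their updates) so the removed connections never revive.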

FlattenGPT: Depth Compression for Transformer with Layer Flattening

Feb 09, 2026

Pruning as a Cooperative Game: Surrogate-Assisted Layer Contribution Estimation for Large Language Models

Feb 08, 2026

Pruning at Initialisation through the lens of Graphon Limit: Convergence, Expressivity, and Generalisation

Feb 06, 2026

Prism: Spectral Parameter Sharing for Multi-Agent Reinforcement Learning

Feb 06, 2026

Topology-Aware Revival for Efficient Sparse Training

Feb 04, 2026

It's not a Lottery, it's a Race: Understanding How Gradient Descent Adapts the Network's Capacity to the Task

Feb 04, 2026

Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation

Feb 03, 2026

GMAC: Global Multi-View Constraint for Automatic Multi-Camera Extrinsic Calibration

Feb 01, 2026

E-Globe: Scalable $ε$-Global Verification of Neural Networks via Tight Upper Bounds and Pattern-Aware Branching

Feb 04, 2026

Lyapunov Stability-Aware Stackelberg Game for Low-Altitude Economy: A Control-Oriented Pruning-Based DRL Approach

Feb 01, 2026