Abstract: State-space models (SSMs), particularly the Mamba architecture, have emerged as powerful alternatives to Transformers for sequence modeling, offering linear-time complexity and competitive performance across diverse tasks. However, their large parameter counts pose significant challenges for deployment in resource-constrained environments. We propose a novel unstructured pruning framework tailored for Mamba models that achieves up to 70\% parameter reduction while retaining over 95\% of the original performance. Our approach integrates three key innovations: (1) a gradient-aware magnitude pruning technique that combines weight magnitude and gradient information to identify less critical parameters, (2) an iterative pruning schedule that gradually increases sparsity to maintain model stability, and (3) a global pruning strategy that optimizes parameter allocation across the entire model. Through extensive experiments on WikiText-103, Long Range Arena, and ETT time-series benchmarks, we demonstrate significant efficiency gains with minimal performance degradation. Our analysis of pruning effects on Mamba's components reveals critical insights into the architecture's redundancy and robustness, enabling practical deployment in resource-constrained settings while broadening Mamba's applicability.
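To make the pruning recipe concrete, below is a minimal, hypothetical PyTorch sketch of gradient-aware magnitude scoring with a global threshold and an iterative sparsity schedule. The scoring rule (|w| times |dL/dw|), the function names, and the schedule values are assumptions for illustration; the abstract does not specify them.

```python
# Hypothetical sketch of gradient-aware magnitude pruning with an
# iterative, global sparsity schedule (scoring rule and names are
# assumptions; the abstract does not give the exact formulation).
import torch

def importance_scores(model):
    """Score each weight by |w| * |dL/dw|, a common gradient-aware proxy."""
    scores = {}
    for name, p in model.named_parameters():
        if p.dim() >= 2 and p.grad is not None:   # only prune weight matrices
            scores[name] = (p.detach().abs() * p.grad.detach().abs()).flatten()
    return scores

def global_prune(model, sparsity):
    """Zero out the globally lowest-scoring fraction `sparsity` of weights."""
    scores = importance_scores(model)
    all_scores = torch.cat(list(scores.values()))
    k = int(sparsity * all_scores.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(all_scores, k).values
    for name, p in model.named_parameters():
        if name in scores:
            keep = (p.detach().abs() * p.grad.detach().abs()) > threshold
            p.data.mul_(keep)

# Iterative schedule: gradually raise sparsity toward the 70% target,
# fine-tuning between steps so the model can recover.
# for s in torch.linspace(0.1, 0.7, steps=7):
#     train_one_epoch(model, data)      # hypothetical recovery/fine-tuning step
#     global_prune(model, float(s))
```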
Abstract: Integrating large language models (LLMs) as priors in reinforcement learning (RL) offers significant advantages but comes with substantial computational costs. We present a principled cache-efficient framework for posterior sampling with LLM-derived priors that dramatically reduces these costs while maintaining high performance. At the core of our approach is an adaptive caching mechanism, where cache parameters are meta-optimized using surrogate gradients derived from policy performance. This design enables efficient inference across both discrete text environments (e.g., TextWorld, ALFWorld) and continuous control domains (e.g., MuJoCo), achieving a 3.8--4.7$\times$ reduction in LLM queries and 4.0--12.0$\times$ lower median latencies (85--93\,ms on a consumer GPU) while retaining 96--98\% of uncached performance. Our theoretical analysis provides KL divergence bounds on approximation quality, validated empirically. The framework extends to offline RL, where our CQL-Prior variant improves performance by 14--29\% and reduces training time by 38--40\%. Extensive evaluations across a diverse suite of eight tasks demonstrate the generalizability and practical viability of LLM-guided RL in resource-constrained settings.
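The sketch below shows, under stated assumptions, one way an adaptive prior cache with a meta-optimized parameter could be structured: a similarity threshold decides when to reuse a cached LLM prior, and a surrogate gradient of policy return nudges that threshold. The class and method names (AdaptivePriorCache, meta_update) and the cosine-similarity keying are hypothetical, not the paper's API.

```python
# Illustrative sketch of an adaptive prior cache for LLM-guided RL
# (all names are assumptions; the paper's surrogate-gradient meta-update
# is represented only by a simple placeholder rule).
import numpy as np

class AdaptivePriorCache:
    def __init__(self, query_llm_prior, threshold=0.85):
        self.query_llm_prior = query_llm_prior   # expensive LLM call: state embedding -> action prior
        self.threshold = threshold               # cache parameter to be meta-optimized
        self.keys, self.values = [], []

    def _similarity(self, a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def prior(self, state_embedding):
        """Return a cached prior if a sufficiently similar state was seen, else query the LLM."""
        if self.keys:
            sims = [self._similarity(state_embedding, k) for k in self.keys]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                return self.values[best]               # cache hit: no LLM query
        prior = self.query_llm_prior(state_embedding)  # cache miss: pay the LLM cost
        self.keys.append(state_embedding)
        self.values.append(prior)
        return prior

    def meta_update(self, surrogate_grad, lr=1e-2):
        """Adjust the threshold with a surrogate gradient of policy return
        w.r.t. the cache parameter (e.g., estimated by finite differences)."""
        self.threshold = float(np.clip(self.threshold + lr * surrogate_grad, 0.5, 0.99))
```

In this sketch, looser thresholds trade more cache hits (fewer LLM queries, lower latency) against a larger deviation from the exact prior, which is where KL-style bounds on approximation quality would apply.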
Abstract: Quantum machine learning for spin and molecular systems faces critical challenges of scarce labeled data and computationally expensive simulations. To address these limitations, we introduce Hamiltonian-Masked Autoencoding (HMAE), a novel self-supervised framework that pre-trains transformers on unlabeled quantum Hamiltonians, enabling efficient few-shot transfer learning. Unlike random masking approaches, HMAE employs a physics-informed strategy grounded in quantum information theory to selectively mask Hamiltonian terms according to their physical significance. Experiments on 12,500 quantum Hamiltonians (60\% real-world, 40\% synthetic) demonstrate that HMAE achieves 85.3\% $\pm$ 1.5\% accuracy in phase classification and 0.15 $\pm$ 0.02 eV MAE in ground-state energy prediction with merely 10 labeled examples, a statistically significant improvement ($p < 0.01$) over classical graph neural networks (78.1\% $\pm$ 2.1\%) and quantum neural networks (76.8\% $\pm$ 2.3\%). Our method's primary advantage is exceptional sample efficiency, reducing the number of required labeled examples by 3--5$\times$ relative to baseline methods, though we emphasize that ground-truth values for fine-tuning and evaluation still require exact diagonalization or tensor networks. We explicitly acknowledge that our current approach is restricted to small quantum systems (12 qubits during training, with limited extension to 16--20 qubits in testing) and that, while promising within this regime, this size restriction prevents immediate application to the larger systems of practical interest in materials science and quantum chemistry.
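As a rough illustration of physics-informed masking, the hypothetical snippet below samples Hamiltonian terms to mask with probabilities driven by a significance score. The score used here (normalized coefficient magnitude) is an assumed stand-in for the quantum-information-theoretic criterion the abstract describes, and the function name and mask ratio are illustrative.

```python
# Hypothetical sketch of physics-informed masking of Hamiltonian terms;
# normalized coefficient magnitude is used as a proxy for the
# quantum-information-theoretic significance score (an assumption).
import torch

def physics_informed_mask(coeffs, mask_ratio=0.3, temperature=1.0):
    """Select terms to mask, biased toward physically significant terms.

    coeffs: (num_terms,) tensor of Pauli-term coefficients of one Hamiltonian.
    Returns a boolean mask (True = term is hidden and must be reconstructed).
    """
    significance = coeffs.abs() / (coeffs.abs().sum() + 1e-12)
    probs = torch.softmax(significance / temperature, dim=0)
    num_mask = max(1, int(mask_ratio * coeffs.numel()))
    masked_idx = torch.multinomial(probs, num_mask, replacement=False)
    mask = torch.zeros_like(coeffs, dtype=torch.bool)
    mask[masked_idx] = True
    return mask

# During pre-training, a transformer encoder would see the unmasked terms and
# be trained to reconstruct the masked coefficients (e.g., with an MSE loss);
# a small head is then fine-tuned on the ~10 labeled downstream examples.
```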