Foundation models learn transferable representations, motivating growing interest in their application to wireless systems. Existing wireless foundation models are predominantly based on transformer architectures, whose quadratic computational and memory complexity can hinder practical deployment for large-scale channels. In this work, we introduce WiMamba, a wireless foundation model built upon the recently proposed Mamba architecture, which replaces attention mechanisms with selective state-space models and enables linear-time sequence modeling. By combining this architectural advantage with adaptive preprocessing, WiMamba achieves scalable, low-latency inference while preserving strong representational capacity. We further develop a task-agnostic, self-supervised pre-training framework tailored to wireless channels, yielding a foundation model that learns transferable channel representations. Evaluations across four downstream tasks demonstrate that WiMamba matches or outperforms transformer-based wireless foundation models while offering substantial reductions in latency and memory usage.