Abstract: Pre-training large language models (LLMs) increasingly requires distributed compute, yet bandwidth constraints make it difficult to scale beyond well-provisioned datacenters, especially when model parallelism forces frequent, large inter-device communications. We study whether SparseLoCo, a low-communication data-parallel method based on infrequent synchronization and sparse pseudo-gradient exchange, can be combined with low-bandwidth pipeline model parallelism via activation and activation-gradient compression. We introduce a heterogeneous distributed training framework in which some participants host full replicas on high-bandwidth interconnects, while resource-limited participants are grouped to jointly instantiate a replica using pipeline parallelism with subspace-projected inter-stage communication. To make the recently introduced subspace pipeline compression compatible with SparseLoCo, we study several adaptations. Across large-scale language-modeling experiments (178M-1B parameters) on standard pretraining corpora, we find that activation compression composes with SparseLoCo at modest cost, while selective (heterogeneous) compression consistently improves the loss-communication tradeoff relative to compressing all replicas, especially at aggressive compression ratios. These results suggest a practical path to incorporating low-bandwidth model parallelism and heterogeneous participants into LLM pre-training.
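To illustrate the kind of inter-stage compression the abstract refers to, the following is a minimal sketch, not the paper's implementation, of subspace-projected activation exchange between adjacent pipeline stages. The function names (`make_projection`, `compress`, `decompress`), the compression ratio `r`, and the use of a shared random orthonormal basis are all illustrative assumptions.

```python
# Hypothetical sketch: project activations onto a low-dimensional subspace
# before sending them to the next pipeline stage, and reconstruct them on
# the receiving side. Both stages build the same basis from a shared seed.
import torch


def make_projection(d_model: int, r: float, seed: int = 0) -> torch.Tensor:
    """Random orthonormal basis spanning a subspace of dimension k ~= r * d_model."""
    k = max(1, int(r * d_model))
    g = torch.Generator().manual_seed(seed)          # identical seed on both stages
    q, _ = torch.linalg.qr(torch.randn(d_model, k, generator=g))
    return q                                         # shape: (d_model, k)


def compress(acts: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # acts: (batch, seq, d_model) -> subspace coefficients: (batch, seq, k)
    return acts @ basis


def decompress(coeffs: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # Approximate reconstruction of the activations on the receiving stage.
    return coeffs @ basis.T
```

The same projection could, in principle, be applied to the activation gradients flowing backward between stages; how that interacts with SparseLoCo's sparse pseudo-gradient exchange is exactly the question the paper studies.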
Abstract: Data heterogeneity between clients remains a key challenge in Federated Learning (FL), particularly for tabular data. This work presents Global Layers (GL), a novel partial model personalization method that is robust in the presence of joint distribution $P(X,Y)$ shift and mixed input/output spaces $X \times Y$ across clients. To the best of our knowledge, GL is the first method capable of supporting both client-exclusive features and client-exclusive classes. We introduce two new benchmark experiments for tabular FL, naturally partitioned from existing real-world datasets: i) UCI Covertype split into four clients by the "wilderness area" feature, and ii) UCI Heart Disease, SAHeart, and UCI Heart Failure, with each dataset treated as a separate client. Empirical results for these experiments in the full-participation setting show that GL achieves better outcomes than Federated Averaging (FedAvg) and local-only training, with some clients even outperforming their centralized baseline.
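As a rough illustration of partial model personalization with shared "global layers", here is a minimal sketch under assumed details: each client keeps a private input encoder and output head sized to its own feature and label spaces, while only the middle layers are aggregated FedAvg-style. The class and function names, layer sizes, and aggregation routine are illustrative, not the authors' code.

```python
# Hypothetical sketch: per-client private encoder/head around shared
# "global layers", with FedAvg-style averaging applied only to the
# shared block. Clients may have different n_features / n_classes.
import torch
import torch.nn as nn


class ClientModel(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(n_features, hidden)      # private: client-exclusive features
        self.global_layers = nn.Sequential(               # shared across clients
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_classes)          # private: client-exclusive classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.global_layers(torch.relu(self.encoder(x))))


def average_global_layers(models: list[ClientModel]) -> None:
    """Average only the shared global layers across client models (uniform weights)."""
    with torch.no_grad():
        params = [list(m.global_layers.parameters()) for m in models]
        for group in zip(*params):
            mean = torch.stack([p.data for p in group]).mean(dim=0)
            for p in group:
                p.data.copy_(mean)
```

Because the encoder and head never leave the client, each client's input dimensionality and class set can differ freely, which is what allows the Covertype and heart-disease partitions described above to be trained jointly.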