Abstract: To reduce the computational and memory costs of Large Language Models (LLMs), dynamic workload reduction schemes such as Mixture of Experts (MoEs), parameter pruning, layer freezing, sparse attention, early token exit, and Mixture of Depths (MoDs) have emerged. However, these methods introduce severe workload imbalances that limit their practicality in large-scale distributed training. We propose DynMo, an autonomous dynamic load balancing solution that ensures a balanced compute distribution across workers when training dynamic models with pipeline parallelism. DynMo adaptively rebalances workloads, dynamically packs tasks into fewer workers to free idle resources, and supports both single-node multi-GPU and multi-node systems. Compared to static training frameworks (Megatron-LM, DeepSpeed), DynMo accelerates training by up to 1.23x (MoEs), 3.18x (parameter pruning), 2.23x (layer freezing), 4.02x (sparse attention), 4.52x (early token exit), and 1.17x (MoDs). DynMo is available at https://anonymous.4open.science/r/DynMo-4D04/.
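For intuition, the following is a minimal sketch of the kind of rebalancing the abstract describes: given measured per-layer compute times (which drift as layers are pruned, frozen, or skipped), layers are greedily repartitioned into contiguous pipeline stages of near-equal total time. The function name and the greedy heuristic are illustrative assumptions, not DynMo's actual algorithm or API.

```python
# Illustrative sketch only: greedily reassign transformer layers to
# pipeline stages so each stage's measured per-layer compute times sum
# to roughly the same total. Not DynMo's actual implementation.

def rebalance_stages(layer_times: list[float], num_stages: int) -> list[list[int]]:
    """Partition layers into contiguous stages with near-equal total time."""
    target = sum(layer_times) / num_stages
    stages, current, acc = [], [], 0.0
    for i, t in enumerate(layer_times):
        current.append(i)
        acc += t
        remaining_layers = len(layer_times) - i - 1
        remaining_stages = num_stages - len(stages) - 1
        # Close the stage once it reaches the target time, while keeping
        # at least one layer for every stage that still has to be filled.
        if remaining_stages > 0 and remaining_layers >= remaining_stages and \
                (acc >= target or remaining_layers == remaining_stages):
            stages.append(current)
            current, acc = [], 0.0
    stages.append(current)
    return stages

# Example: after pruning, later layers became cheaper, so the balanced
# partition assigns more of them to each of the later stages.
times = [4.0, 4.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0]
print(rebalance_stages(times, 4))  # -> [[0, 1], [2, 3], [4, 5, 6], [7]]
```

In a real system, such a repartitioning step would be triggered periodically from measured stage times, and stages whose load drops low enough would be packed together so the freed workers can be released.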
Abstract: Sparse tensor operations are gaining attention in emerging applications such as social networks, deep learning, diagnosis, crime, and review analysis. However, a major obstacle to research on sparse tensor operations is the lack of a broad-scale sparse tensor dataset. Another challenge is analyzing sparse tensor features, which not only reveal a tensor's nonzero pattern but also strongly influence the choice of storage format, decomposition algorithm, and reordering method. Moreover, because real tensors are large, even extracting these features is costly unless done carefully. To address these gaps in the literature, we develop a smart sparse tensor generator that mimics the essential features of real sparse tensors. We also propose efficient methods for extracting an extensive set of sparse tensor features. The effectiveness of our generator is validated through the quality of the features and the decomposition performance of the generated tensors. Both the sparse tensor feature extractor and the tensor generator are open source, with all artifacts available at https://github.com/sparcityeu/feaTen and https://github.com/sparcityeu/genTen, respectively.
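As a concrete illustration of the kind of features such an extractor computes, the sketch below derives a few cheap statistics from a COO-format sparse tensor: global density plus per-mode statistics over slice nonzero counts. The function name and the exact feature set are illustrative assumptions, not feaTen's actual interface.

```python
# Illustrative sketch: basic sparse tensor features from COO coordinates.
# Not feaTen's actual API; the feature set here is an assumed minimal one.
import numpy as np

def slice_features(coords: np.ndarray, dims: tuple[int, ...]) -> dict:
    """coords: (nnz, order) integer array of nonzero indices."""
    nnz, order = coords.shape
    feats = {
        "order": order,
        "nnz": nnz,
        "density": nnz / np.prod(dims, dtype=np.float64),
    }
    # Per-mode statistics over the nonzero counts of each slice: these
    # summarize the nonzero pattern in one pass per mode and are the kind
    # of quantity that guides storage-format and algorithm choices.
    for mode in range(order):
        counts = np.bincount(coords[:, mode], minlength=dims[mode])
        feats[f"mode{mode}_slice_nnz_mean"] = counts.mean()
        feats[f"mode{mode}_slice_nnz_std"] = counts.std()
        feats[f"mode{mode}_slice_nnz_max"] = int(counts.max())
        feats[f"mode{mode}_empty_slices"] = int((counts == 0).sum())
    return feats

# Example: a tiny random order-3 tensor with 1000 nonzeros.
rng = np.random.default_rng(0)
dims = (100, 80, 60)
coords = np.stack([rng.integers(0, d, 1000) for d in dims], axis=1)
print(slice_features(coords, dims))
```

A single pass per mode keeps the cost linear in the number of nonzeros, which matters precisely because, as noted above, feature extraction on large real tensors is otherwise expensive.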