
Lei Liang


Rethinking Memory and Communication Cost for Efficient Large Language Model Training

Oct 09, 2023
Chan Wu, Hanxiao Zhang, Lin Ju, Jinjing Huang, Youshao Xiao, Zhaoxin Huan, Siyuan Li, Fanzhuang Meng, Lei Liang, Xiaolu Zhang, Jun Zhou


As model sizes and training datasets continue to grow, large-scale model training frameworks reduce memory consumption through various sharding techniques. However, the resulting communication overhead lowers training efficiency, especially in public cloud environments with varying network bandwidths. In this paper, we rethink the impact of memory consumption and communication overhead on the training speed of large language models, and propose a memory-communication balanced Partial Redundancy Optimizer (PaRO). PaRO reduces the volume and frequency of inter-group communication by grouping GPU clusters and introducing minor intra-group memory redundancy, thereby improving training efficiency. Additionally, we propose a Hierarchical Overlapping Ring (HO-Ring) communication topology to enhance communication efficiency between nodes or across switches in large model training. Our experiments demonstrate that the HO-Ring algorithm improves communication efficiency by 32.6% compared to the traditional Ring algorithm. Compared to the ZeRO baseline, PaRO improves training throughput by 1.2x-2.6x and achieves near-linear scalability. The PaRO strategy therefore provides more fine-grained options for trading off memory consumption against communication overhead in different training scenarios.
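
A minimal sketch, assuming PyTorch and its torch.distributed API, of the partial-redundancy idea the abstract describes: ranks are partitioned into groups, gradient/optimizer state is sharded only within a group and replicated across groups, so the frequent collectives stay on the fast intra-group links. The helper names, the group size, and the flat-gradient layout are illustrative assumptions, not the paper's implementation.

```python
# Sketch of intra-group sharding with inter-group redundancy (PaRO-style grouping).
# Assumes torch.distributed is already initialized and the flattened gradient
# length is divisible by the group size.
import torch
import torch.distributed as dist

def build_intra_node_groups(group_size: int):
    """Partition all ranks into contiguous groups of `group_size` ranks."""
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    my_group = None
    for start in range(0, world_size, group_size):
        ranks = list(range(start, min(start + group_size, world_size)))
        pg = dist.new_group(ranks)  # every rank must call new_group for every group
        if rank in ranks:
            my_group = pg
    return my_group

def shard_grads_within_group(flat_grad: torch.Tensor, group, group_size: int):
    """Reduce-scatter gradients across the group only; each rank keeps one shard.

    The shard is redundant across groups, so no inter-group traffic is needed here.
    """
    shards = list(flat_grad.chunk(group_size))
    my_shard = torch.empty_like(shards[0])
    dist.reduce_scatter(my_shard, shards, group=group)  # intra-group traffic only
    return my_shard
```

In this sketch the trade-off is explicit: a larger group size means less memory redundancy but more traffic over slower inter-node links, which is the balance the abstract refers to.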


CrowdTSC: Crowd-based Neural Networks for Text Sentiment Classification

Apr 26, 2020
Keyu Yang, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, Baihua Zheng


Sentiment classification is a fundamental task in content analysis. Although deep learning has demonstrated promising performance in text classification compared with shallow models, it still struggles to train a satisfactory classifier for text sentiment. Human beings are more sophisticated than machine learning models at understanding and capturing the emotional polarity of text. In this paper, we bring the power of human intelligence to text sentiment classification. We propose Crowd-based neural networks for Text Sentiment Classification (CrowdTSC for short). We design and post questions on a crowdsourcing platform to collect the keywords in texts, and use sampling and clustering to reduce the cost of crowdsourcing. We also present an attention-based neural network and a hybrid neural network, both of which incorporate the collected keywords into deep neural networks as human guidance. Extensive experiments on public datasets confirm that CrowdTSC outperforms state-of-the-art models, demonstrating the effectiveness of crowd-based keyword guidance.
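
A minimal sketch, in PyTorch, of one way crowd-collected keywords could guide attention as the abstract describes: token positions the crowd marked as keywords receive an additive bias before the softmax, so the classifier attends more to human-selected evidence. The module and parameter names (e.g. keyword_bias) are illustrative assumptions, not the paper's exact architecture.

```python
# Keyword-guided attention pooling over encoder states (illustrative sketch).
import torch
import torch.nn as nn

class KeywordGuidedAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_classes: int, keyword_bias: float = 2.0):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)          # per-token attention score
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.keyword_bias = keyword_bias               # boost for crowd-marked tokens

    def forward(self, token_states: torch.Tensor, keyword_mask: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim) from any encoder (e.g. a BiLSTM)
        # keyword_mask: (batch, seq_len), 1.0 where the crowd marked a keyword
        scores = self.score(token_states).squeeze(-1)        # (batch, seq_len)
        scores = scores + self.keyword_bias * keyword_mask   # bias toward keywords
        weights = torch.softmax(scores, dim=-1)
        pooled = (weights.unsqueeze(-1) * token_states).sum(dim=1)
        return self.classifier(pooled)

# Usage example with random tensors standing in for encoder output and crowd labels.
attn = KeywordGuidedAttention(hidden_dim=128, num_classes=2)
states = torch.randn(4, 20, 128)
mask = torch.zeros(4, 20)
mask[:, 3] = 1.0  # pretend token 3 was marked as a keyword by the crowd
logits = attn(states, mask)
```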
