Zhengda Bian

A Frequency-aware Software Cache for Large Recommendation System Embeddings

Aug 08, 2022
Jiarui Fang, Geng Zhang, Jiatong Han, Shenggui Li, Zhengda Bian, Yongbin Li, Jin Liu, Yang You

Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Oct 28, 2021
Zhengda Bian, Hongxin Liu, Boxiang Wang, Haichen Huang, Yongbin Li, Chuanrui Wang, Fan Cui, Yang You

Online Evolutionary Batch Size Orchestration for Scheduling Deep Learning Workloads in GPU Clusters

Aug 08, 2021
Zhengda Bian, Shenggui Li, Wei Wang, Yang You

2.5-dimensional distributed model training

May 30, 2021
Boxiang Wang, Qifan Xu, Zhengda Bian, Yang You

Maximizing Parallelism in Distributed Training for Huge Neural Networks

May 30, 2021
Zhengda Bian, Qifan Xu, Boxiang Wang, Yang You
