Youjie Li

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling

Mar 26, 2022
Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin

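The title's key mechanism, random boundary node sampling, reduces the cross-partition traffic that dominates partition-parallel full-graph training. Below is a minimal toy sketch of that idea, not the paper's code; sample_boundary_nodes, the node IDs, and the keep probability are all hypothetical.

```python
# Toy sketch of random boundary-node sampling (illustrative only).
import random

def sample_boundary_nodes(boundary_nodes, keep_prob, seed=None):
    """Keep each boundary node independently with probability keep_prob.

    Boundary nodes are nodes owned by other partitions whose features this
    partition would otherwise fetch every iteration; communicating only a
    random subset cuts both traffic and memory, at the cost of a stochastic
    approximation to the full neighborhood aggregation.
    """
    rng = random.Random(seed)
    return [v for v in boundary_nodes if rng.random() < keep_prob]

# A partition that depends on 8 remote (boundary) nodes.
boundary = list(range(8))
kept = sample_boundary_nodes(boundary, keep_prob=0.25, seed=0)
print(f"communicating {len(kept)}/{len(boundary)} boundary features: {kept}")
```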

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Mar 20, 2022
Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin

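"Pipelined feature communication" here means overlapping the exchange of boundary-node features with local computation by tolerating one-iteration-stale features. The sketch below shows only that overlap pattern; fetch_remote_features and local_gcn_step are hypothetical stand-ins, not the paper's API.

```python
# Toy sketch of overlapping feature communication with compute (illustrative).
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_remote_features(step):
    time.sleep(0.1)              # stands in for cross-partition communication
    return f"features@{step}"

def local_gcn_step(step, features):
    time.sleep(0.1)              # stands in for the local forward/backward pass
    print(f"step {step}: computed with stale {features}")

with ThreadPoolExecutor(max_workers=1) as pool:
    pending = pool.submit(fetch_remote_features, 0)
    for step in range(1, 4):
        stale = pending.result()                            # features from step-1
        pending = pool.submit(fetch_remote_features, step)  # prefetch next round
        local_gcn_step(step, stale)                         # overlaps with fetch
```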

Harmony: Overcoming the hurdles of GPU memory capacity to train massive DNN models on commodity servers

Feb 02, 2022
Youjie Li, Amar Phanishayee, Derek Murray, Jakub Tarnawski, Nam Sung Kim

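One common way past the GPU memory-capacity hurdle named in the title is to keep weights in host memory and swap them onto the accelerator one stage at a time. The sketch below shows that swapping pattern only; it assumes PyTorch and is not Harmony's actual scheduler, which must also decide placement and overlap transfers with compute.

```python
# Toy sketch of layer-by-layer weight swapping (illustrative only).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
layers = [torch.nn.Linear(1024, 1024) for _ in range(8)]  # resident in host memory

x = torch.randn(32, 1024)
with torch.no_grad():
    for layer in layers:
        layer.to(device)          # swap this layer's weights in
        x = layer(x.to(device))   # compute on the accelerator
        layer.to("cpu")           # swap the weights back out to free memory
print(x.shape)                    # torch.Size([32, 1024])
```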

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018
Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

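The pipelining in the title overlaps gradient communication with the next iteration's computation, applying each aggregated gradient one step late. A minimal sketch of that schedule, with hypothetical stand-ins for the compute and allreduce phases (not the paper's system):

```python
# Toy sketch of pipelined SGD with one-step-delayed updates (illustrative).
from concurrent.futures import ThreadPoolExecutor
import time

def compute_gradients(step):
    time.sleep(0.1)                  # stands in for forward/backward
    return f"grad@{step}"

def allreduce_gradients(grad):
    time.sleep(0.1)                  # stands in for decentralized aggregation
    return f"avg({grad})"

with ThreadPoolExecutor(max_workers=1) as pool:
    in_flight = pool.submit(allreduce_gradients, compute_gradients(0))
    for step in range(1, 4):
        grad = compute_gradients(step)                     # overlaps allreduce
        print(f"step {step}: apply {in_flight.result()}")  # one-step-late update
        in_flight = pool.submit(allreduce_gradients, grad)
```

With perfect overlap, per-iteration wall time drops from compute time plus communication time to the maximum of the two.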

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018
Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr

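The bandwidth savings in the title rest on a compressor that commutes with aggregation: when compression is linear, workers can sum compressed gradients and decompress just once. A toy numpy sketch of that commutativity; the random orthonormal basis is purely illustrative, whereas the paper exploits linear structure actually present in CNN gradients.

```python
# Toy sketch of linearly compressed gradient aggregation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d, k, workers = 512, 32, 4
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal d x k basis

grads = [rng.standard_normal(d) for _ in range(workers)]
compressed = [basis.T @ g for g in grads]              # d -> k per worker
aggregated = sum(compressed)                           # summed in compressed space
approx = basis @ aggregated                            # decompressed once
exact = sum(grads)

# With random toy gradients this projection is very lossy; the point of the
# construction is only that compress-then-sum equals sum-then-compress.
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```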