Mingchao Yu

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

Oct 22, 2019
Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram


Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018
Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing


GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018
Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
