Jinliang Wei

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads

May 25, 2021

Priority-based Parameter Propagation for Distributed DNN Training

May 10, 2019

Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters

Jun 11, 2017

Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines

Dec 19, 2015

Petuum: A New Platform for Distributed Machine Learning on Big Data

May 14, 2015

LightLDA: Big Topic Models on Modest Compute Clusters

Dec 04, 2014

High-Performance Distributed ML at Scale through Parameter Server Consistency Models

Oct 29, 2014

Consistent Bounded-Asynchronous Parameter Servers for Distributed ML

Dec 31, 2013