Aydin Buluc

Sparsity-Aware Communication for Distributed Graph Neural Network Training

Apr 07, 2025

Scaling Graph Neural Networks for Particle Track Reconstruction

Apr 07, 2025

Distributed Matrix-Based Sampling for Graph Neural Network Training

Nov 06, 2023

Randomized Algorithms for Scientific Computing (RASC)

Apr 19, 2021

PersGNN: Applying Topological Data Analysis and Geometric Deep Learning to Structure-Based Protein Function Prediction

Oct 30, 2020

Reducing Communication in Graph Neural Network Training

May 07, 2020

Integrated Model, Batch and Domain Parallelism in Training Neural Networks

May 16, 2018

Communication-Avoiding Optimization Methods for Distributed Massive-Scale Sparse Inverse Covariance Estimation

Apr 08, 2018