
Satoshi Matsuoka

Myths and Legends in High-Performance Computing

Jan 06, 2023

MLPerf HPC: A Holistic Benchmark Suite for Scientific Machine Learning on HPC Systems

Oct 26, 2021

Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA

Aug 26, 2020

The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism

Jul 25, 2020

Second-order Optimization Method for Large Mini-batch: Training ResNet-50 on ImageNet in 35 Epochs

Dec 05, 2018

μ-cuDNN: Accelerating Deep Learning Frameworks with Micro-Batching

Apr 13, 2018