Michael Houston

Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model

Feb 04, 2022
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zheng, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro

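This work combines DeepSpeed's pipeline parallelism and ZeRO-based data parallelism with Megatron-LM's tensor slicing to train a 530-billion-parameter language model. As a minimal, illustrative sketch of the DeepSpeed side only (the model, batch size, and config values below are placeholders, not the MT-NLG settings):

```python
# Minimal sketch of wrapping a model in the DeepSpeed engine.
# Intended to be launched with the DeepSpeed launcher, e.g.:
#   deepspeed train_sketch.py
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a transformer

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},          # mixed-precision training
    "zero_optimization": {"stage": 1},  # shard optimizer states across data-parallel ranks
}

# deepspeed.initialize returns an engine that manages the distributed
# optimizer, gradient accumulation, and fp16 loss scaling.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 1024).to(engine.device).half()
loss = engine(x).float().pow(2).mean()
engine.backward(loss)   # scaled backward pass handled by the engine
engine.step()           # optimizer step + loss-scale update
```

The actual training run described in the paper layers tensor, pipeline, and data parallelism ("3D parallelism") to scale across thousands of GPUs; the sketch above shows only the single-engine entry point.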

Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs

Oct 29, 2019
Liu Yang, Sean Treichler, Thorsten Kurth, Keno Fischer, David Barajas-Solano, Josh Romero, Valentin Churavy, Alexandre Tartakovsky, Michael Houston, Prabhat, George Karniadakis

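The "physics-informed" ingredient of this approach penalizes the PDE residual of a network's output, computed via automatic differentiation. A minimal sketch for a toy deterministic 1D problem u''(x) = f(x), with a hypothetical forcing term; the paper's setting (stochastic PDEs learned adversarially at scale) is far richer:

```python
# Minimal sketch of a physics-informed residual loss: differentiate the
# network output w.r.t. its input with autograd and penalize the PDE
# residual. Toy 1D Poisson problem u'' = f; f is a made-up forcing term.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    # First and second derivatives of u w.r.t. x via autograd.
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.sin(x)            # hypothetical forcing term
    return d2u - f              # residual of u'' = f

x = torch.rand(128, 1)
physics_loss = pde_residual(x).pow(2).mean()
physics_loss.backward()         # gradients flow into the network weights
```

In the GAN formulation, a residual term like this constrains the generator so that its samples respect the governing equations rather than merely matching observed data.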

Mixed Precision Training

Feb 15, 2018
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu

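The recipe proposed here (keep FP32 master weights, run the forward and backward passes in FP16, and scale the loss so that small gradient magnitudes survive FP16's reduced dynamic range) is essentially what torch.cuda.amp now implements. A minimal sketch using that API, with a placeholder model and hyperparameters:

```python
# Minimal sketch of mixed-precision training with dynamic loss scaling.
import torch

model = torch.nn.Linear(512, 512).cuda()     # weights stay in FP32
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()         # dynamic loss scaling

x = torch.randn(32, 512, device="cuda")
with torch.cuda.amp.autocast():              # FP16 compute where safe
    loss = model(x).pow(2).mean()

scaler.scale(loss).backward()   # backprop the scaled loss
scaler.step(optimizer)          # unscale grads; skip step on inf/nan
scaler.update()                 # grow or shrink the loss scale
```

Dynamic scaling sidesteps the need to hand-tune a fixed loss-scale factor: the scaler raises the scale while gradients stay finite and backs off when overflow is detected.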