Tamara Norman

PartIR: Composing SPMD Partitioning Strategies for Machine Learning

Jan 23, 2024
Sami Alabed, Bart Chrzaszcz, Juliana Franco, Dominik Grewe, Dougal Maclaurin, James Molloy, Tom Natan, Tamara Norman, Xiaoyue Pan, Adam Paszke, Norman A. Rink, Michael Schaarschmidt, Timur Sitdikov, Agnieszka Swietlik, Dimitrios Vytiniotis, Joel Wee

Automatic Discovery of Composite SPMD Partitioning Strategies in PartIR

Oct 07, 2022
Sami Alabed, Dominik Grewe, Juliana Franco, Bart Chrzaszcz, Tom Natan, Tamara Norman, Norman A. Rink, Dimitrios Vytiniotis, Michael Schaarschmidt

Automap: Towards Ergonomic Automated Parallelism for ML Models

Dec 06, 2021
Michael Schaarschmidt, Dominik Grewe, Dimitrios Vytiniotis, Adam Paszke, Georg Stefan Schmid, Tamara Norman, James Molloy, Jonathan Godwin, Norman Alexander Rink, Vinod Nair, Dan Belov

Synthesizing Optimal Parallelism Placement and Reduction Strategies on Hierarchical Systems for Deep Learning

Oct 20, 2021
Ningning Xie, Tamara Norman, Dominik Grewe, Dimitrios Vytiniotis

Acme: A Research Framework for Distributed Reinforcement Learning

Jun 01, 2020
Matt Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Andrew Cowie, Ziyu Wang, Bilal Piot, Nando de Freitas
