Alexander Heinecke

Microscaling Data Formats for Deep Learning

Oct 19, 2023
Bita Darvish Rouhani, Ritchie Zhao, Ankit More, Mathew Hall, Alireza Khodamoradi, Summer Deng, Dhruv Choudhary, Marius Cornea, Eric Dellinger, Kristof Denolf, Dusan Stosic, Venmugil Elango, Maximilian Golub, Alexander Heinecke, Phil James-Roxby, Dharmesh Jani, Gaurav Kolhe, Martin Langhammer, Ada Li, Levi Melnick, Maral Mesmakhosroshahi, Andres Rodriguez, Michael Schulte, Rasoul Shafipour, Lei Shao, Michael Siu, Pradeep Dubey, Paulius Micikevicius, Maxim Naumov, Colin Verrilli, Ralph Wittig, Doug Burger, Eric Chung

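The MX formats pair each block of k elements (k = 32 in the baseline configuration) with one shared power-of-two scale, so narrow element types such as FP8 or INT8 keep usable dynamic range. A minimal sketch of the shared-scale quantization step, assuming an INT8 element type and a scale derived from the block's absolute maximum (helper names are illustrative, not from the paper):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK 32  /* elements sharing one scale in the baseline MX configuration */

/* Quantize one block: pick a power-of-two scale from the block's
 * absolute maximum, then round each element onto the int8 grid. */
static void mx_quantize_block(const float *x, int8_t *q, float *scale) {
    float amax = 0.0f;
    for (int i = 0; i < BLOCK; i++)
        amax = fmaxf(amax, fabsf(x[i]));
    /* Shared scale: 2^ceil(log2(amax/127)); guard the all-zero block. */
    int e = (amax > 0.0f) ? (int)ceilf(log2f(amax / 127.0f)) : 0;
    *scale = ldexpf(1.0f, e);
    for (int i = 0; i < BLOCK; i++) {
        float r = roundf(x[i] / *scale);
        q[i] = (int8_t)fmaxf(-128.0f, fminf(127.0f, r));
    }
}

int main(void) {
    float x[BLOCK], scale;
    int8_t q[BLOCK];
    for (int i = 0; i < BLOCK; i++) x[i] = 0.01f * (i - 16);
    mx_quantize_block(x, q, &scale);
    printf("scale=%g  x[5]=%g  dequant=%g\n", scale, x[5], scale * q[5]);
    return 0;
}
```

Dequantization is just scale * q[i], so the scale is stored once per block rather than once per element.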

Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures

Apr 25, 2023
Evangelos Georganas, Dhiraj Kalamkar, Kirill Voronin, Antonio Noack, Hans Pabst, Alexander Breuer, Alexander Heinecke

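This line of work writes a tensor operation as explicit outer loops around a small fixed-size microkernel, so the schedule (loop order, blocking factors, parallelization) can be tuned without touching the kernel. A rough sketch of that separation for a blocked matrix multiply (the blocking factors and the scalar microkernel are illustrative stand-ins, not the paper's JIT-generated kernels):

```c
#include <stdio.h>
#include <string.h>

#define M 128
#define N 128
#define K 128
#define MB 32  /* illustrative blocking factors; in this approach they are tunable */
#define NB 32
#define KB 32

/* Fixed-size microkernel: C[MB][NB] += A[MB][KB] * B[KB][NB].
 * In practice this would be a JIT-generated, ISA-specific kernel. */
static void microkernel(const float *A, const float *B, float *C,
                        int lda, int ldb, int ldc) {
    for (int i = 0; i < MB; i++)
        for (int k = 0; k < KB; k++)
            for (int j = 0; j < NB; j++)
                C[i * ldc + j] += A[i * lda + k] * B[k * ldb + j];
}

int main(void) {
    static float A[M * K], B[K * N], C[M * N];
    for (int i = 0; i < M * K; i++) A[i] = 1.0f;
    for (int i = 0; i < K * N; i++) B[i] = 2.0f;
    memset(C, 0, sizeof(C));
    /* The "schedule": three explicit blocked loops around the microkernel.
     * Reordering or re-tiling these loops changes performance, not results. */
    for (int ib = 0; ib < M; ib += MB)
        for (int jb = 0; jb < N; jb += NB)
            for (int kb = 0; kb < K; kb += KB)
                microkernel(&A[ib * K + kb], &B[kb * N + jb],
                            &C[ib * N + jb], K, N, N);
    printf("C[0] = %g (expect %g)\n", C[0], 2.0f * K);
    return 0;
}
```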

FP8 Formats for Deep Learning

Sep 12, 2022
Paulius Micikevicius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, Alexander Heinecke, Patrick Judd, John Kamalu, Naveen Mellempudi, Stuart Oberman, Mohammad Shoeybi, Michael Siu, Hao Wu

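The paper proposes two 8-bit floating-point encodings: E4M3 (4 exponent bits, 3 mantissa bits, bias 7, no infinities, a single NaN pattern) and E5M2 (5 exponent bits, 2 mantissa bits, bias 15, IEEE-style infinities and NaNs). A small decoder sketch for E5M2 under that layout (the function name is illustrative):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode an FP8 E5M2 byte: 1 sign bit, 5 exponent bits (bias 15),
 * 2 mantissa bits. IEEE-like: exp==0 is subnormal, exp==31 is inf/NaN. */
static float fp8_e5m2_to_float(uint8_t b) {
    int s = (b >> 7) & 1;
    int e = (b >> 2) & 0x1F;
    int m = b & 0x3;
    float sign = s ? -1.0f : 1.0f;
    if (e == 31)                        /* inf (m==0) or NaN (m!=0) */
        return m ? NAN : sign * INFINITY;
    if (e == 0)                         /* subnormal: 2^-14 * m/4 */
        return sign * ldexpf((float)m / 4.0f, -14);
    return sign * ldexpf(1.0f + (float)m / 4.0f, e - 15);  /* normal */
}

int main(void) {
    /* 0x3C = 0 01111 00 -> e=15, m=0 -> 1.0 */
    printf("0x3C -> %g\n", fp8_e5m2_to_float(0x3C));
    /* 0x7B = 0 11110 11 -> e=30, m=3 -> max finite value 57344 */
    printf("0x7B -> %g\n", fp8_e5m2_to_float(0x7B));
    return 0;
}
```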

FPGA-based AI Smart NICs for Scalable Distributed AI Training Systems

Apr 22, 2022
Rui Ma, Evangelos Georganas, Alexander Heinecke, Andrew Boutros, Eriko Nurvitadhi

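The Smart-NIC design moves the collectives of distributed training, chiefly all-reduce over gradients, into FPGA NICs on the network path. As a point of reference for what is being offloaded, here is the reduce-scatter phase of a ring all-reduce simulated in plain C (no network layer; P ranks in one process):

```c
#include <stdio.h>

#define P 4   /* ranks */
#define N 8   /* gradient length; assume P divides N */

/* Ring all-reduce, reduce-scatter phase, simulated in one process:
 * after P-1 steps, rank r holds the fully reduced chunk (r+1) mod P.
 * A Smart NIC performs these per-chunk additions in the network path. */
static void reduce_scatter(float grad[P][N]) {
    int chunk = N / P;
    for (int step = 0; step < P - 1; step++) {
        float tmp[P][N];                     /* snapshot before the exchange */
        for (int r = 0; r < P; r++)
            for (int i = 0; i < N; i++) tmp[r][i] = grad[r][i];
        for (int r = 0; r < P; r++) {
            int src = (r - 1 + P) % P;          /* left neighbor sends */
            int c = (r - 1 - step + 2 * P) % P; /* chunk index this step */
            for (int i = 0; i < chunk; i++)
                grad[r][c * chunk + i] += tmp[src][c * chunk + i];
        }
    }
}

int main(void) {
    float grad[P][N];
    for (int r = 0; r < P; r++)
        for (int i = 0; i < N; i++) grad[r][i] = (float)(r + 1);
    reduce_scatter(grad);
    /* Rank 0 now owns reduced chunk 1: each element = 1+2+3+4 = 10. */
    printf("rank 0, chunk 1: %g (expect 10)\n", grad[0][1 * (N / P)]);
    return 0;
}
```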

DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks

Apr 16, 2021
Vasimuddin Md, Sanchit Misra, Guixiang Ma, Ramanarayan Mohanty, Evangelos Georganas, Alexander Heinecke, Dhiraj Kalamkar, Nesreen K. Ahmed, Sasikanth Avancha

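The computational core that DistGNN distributes is the neighbor-aggregation step of GNN training: a sparse reduction of vertex features over the adjacency structure of each graph partition. A minimal single-partition sketch over a CSR graph (shapes are illustrative; the distributed delayed-aggregation machinery is not shown):

```c
#include <stdio.h>

#define V 4   /* vertices */
#define F 2   /* feature dimension */

/* Sum-aggregate neighbor features over a CSR graph:
 * out[v] = sum over u in neighbors(v) of feat[u].
 * This per-partition reduction is the kernel DistGNN distributes. */
static void aggregate(const int *rowptr, const int *col,
                      const float feat[V][F], float out[V][F]) {
    for (int v = 0; v < V; v++) {
        for (int f = 0; f < F; f++) out[v][f] = 0.0f;
        for (int e = rowptr[v]; e < rowptr[v + 1]; e++)
            for (int f = 0; f < F; f++)
                out[v][f] += feat[col[e]][f];
    }
}

int main(void) {
    /* Path graph 0-1-2-3 stored as CSR. */
    int rowptr[V + 1] = {0, 1, 3, 5, 6};
    int col[6] = {1, 0, 2, 1, 3, 2};
    float feat[V][F] = {{1, 1}, {2, 2}, {3, 3}, {4, 4}};
    float out[V][F];
    aggregate(rowptr, col, feat, out);
    printf("out[1] = (%g, %g)  (expect 4, 4)\n", out[1][0], out[1][1]);
    return 0;
}
```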

Efficient and Generic 1D Dilated Convolution Layer for Deep Learning

Apr 16, 2021
Narendra Chaudhary, Sanchit Misra, Dhiraj Kalamkar, Alexander Heinecke, Evangelos Georganas, Barukh Ziv, Menachem Adelman, Bharat Kaul

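A 1D dilated convolution with dilation d samples the input at stride d, y[t] = sum_k w[k] * x[t + k*d], enlarging the receptive field without adding weights. A direct-form sketch of those semantics (single channel, 'valid' padding; not the paper's vectorized kernel):

```c
#include <stdio.h>

/* Direct 1D dilated convolution, single channel, 'valid' padding:
 * y[t] = sum_k w[k] * x[t + k*d], output length n - (kw-1)*d. */
static void dconv1d(const float *x, int n, const float *w, int kw,
                    int d, float *y) {
    int out = n - (kw - 1) * d;
    for (int t = 0; t < out; t++) {
        float acc = 0.0f;
        for (int k = 0; k < kw; k++)
            acc += w[k] * x[t + k * d];
        y[t] = acc;
    }
}

int main(void) {
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float w[3] = {1, 1, 1};          /* box filter for easy checking */
    float y[4];                      /* 8 - (3-1)*2 = 4 outputs */
    dconv1d(x, 8, w, 3, 2, y);
    /* y[0] = x[0] + x[2] + x[4] = 1 + 3 + 5 = 9 */
    printf("y[0] = %g (expect 9)\n", y[0]);
    return 0;
}
```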

Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads

Apr 14, 2021
Evangelos Georganas, Dhiraj Kalamkar, Sasikanth Avancha, Menachem Adelman, Cristina Anderson, Alexander Breuer, Narendra Chaudhary, Abhisek Kundu, Vasimuddin Md, Sanchit Misra, Ramanarayan Mohanty, Hans Pabst, Barukh Ziv, Alexander Heinecke

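The workhorse TPP is the batch-reduce GEMM: a sum of small matrix products accumulated into one output tile, C += sum_b A_b * B_b, out of which larger operators (convolutions, MLPs, attention blocks) are composed. A plain-C sketch of its semantics (real TPP kernels are JIT-generated and ISA-specific; tile sizes here are illustrative):

```c
#include <stdio.h>
#include <string.h>

#define BM 4  /* tile sizes; real TPP kernels are JIT-specialized */
#define BN 4
#define BK 4

/* Batch-reduce GEMM semantics: C += sum over b of A[b] * B[b].
 * A is batch x BM x BK, B is batch x BK x BN, C is BM x BN. */
static void brgemm(int batch, const float *A, const float *B, float *C) {
    for (int b = 0; b < batch; b++)
        for (int i = 0; i < BM; i++)
            for (int k = 0; k < BK; k++)
                for (int j = 0; j < BN; j++)
                    C[i * BN + j] +=
                        A[b * BM * BK + i * BK + k] *
                        B[b * BK * BN + k * BN + j];
}

int main(void) {
    enum { BATCH = 3 };
    float A[BATCH * BM * BK], B[BATCH * BK * BN], C[BM * BN];
    for (int i = 0; i < BATCH * BM * BK; i++) A[i] = 1.0f;
    for (int i = 0; i < BATCH * BK * BN; i++) B[i] = 1.0f;
    memset(C, 0, sizeof(C));
    brgemm(BATCH, A, B, C);
    /* Each C entry = BATCH * BK = 12. */
    printf("C[0] = %g (expect 12)\n", C[0]);
    return 0;
}
```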

PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives

Jun 02, 2020
Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, Bharat Kaul

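Polyhedral compilation rewrites affine loop nests, via tiling, interchange, and fusion, while preserving their dependences; PolyDL applies such transformations to generate candidate schedules for GEMM-like DL primitives. A before/after tiling sketch on a matrix multiply (tile size is illustrative; a polyhedral tool derives legality and picks parameters analytically):

```c
#include <stdio.h>
#include <string.h>

#define N 64
#define T 16  /* illustrative tile size; a real tool would derive/search this */

static float A[N][N], B[N][N], C[N][N];

/* Original affine loop nest. */
static void matmul(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
}

/* The same computation after a polyhedral-style tiling transformation:
 * identical iteration set, reordered for cache locality. */
static void matmul_tiled(void) {
    for (int ii = 0; ii < N; ii += T)
        for (int jj = 0; jj < N; jj += T)
            for (int kk = 0; kk < N; kk += T)
                for (int i = ii; i < ii + T; i++)
                    for (int k = kk; k < kk + T; k++)
                        for (int j = jj; j < jj + T; j++)
                            C[i][j] += A[i][k] * B[k][j];
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = 1.0f; B[i][j] = 1.0f; }
    memset(C, 0, sizeof(C));
    matmul_tiled();
    printf("C[0][0] = %g (expect %d)\n", C[0][0], N);
    (void)matmul;  /* untiled version kept for comparison */
    return 0;
}
```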