Tal Ben-Nun

VENOM: A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores

Oct 03, 2023
Roberto L. Castro, Andrei Ivanov, Diego Andrade, Tal Ben-Nun, Basilio B. Fraguela, Torsten Hoefler

Cached Operator Reordering: A Unified View for Fast GNN Training

Aug 23, 2023
Julia Bazinska, Andrei Ivanov, Tal Ben-Nun, Nikoli Dryden, Maciej Besta, Siyuan Shen, Torsten Hoefler

STen: Productive and Efficient Sparsity in PyTorch

Apr 15, 2023
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler

Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization

Mar 14, 2023
Lukas Trümper, Tal Ben-Nun, Philipp Schaad, Alexandru Calotoiu, Torsten Hoefler

A Theory of I/O-Efficient Sparse Neural Network Inference

Jan 03, 2023
Niels Gleinig, Tal Ben-Nun, Torsten Hoefler

ENS-10: A Dataset For Post-Processing Ensemble Weather Forecasts

Jun 29, 2022
Saleh Ashkboos, Langwen Huang, Nikoli Dryden, Tal Ben-Nun, Peter Dueben, Lukas Gianinazzi, Luca Kummer, Torsten Hoefler

A Data-Centric Optimization Framework for Machine Learning

Oct 20, 2021
Oliver Rausch, Tal Ben-Nun, Nikoli Dryden, Andrei Ivanov, Shigang Li, Torsten Hoefler

Learning Combinatorial Node Labeling Algorithms

Jun 15, 2021
Lukas Gianinazzi, Maximilian Fries, Nikoli Dryden, Tal Ben-Nun, Maciej Besta, Torsten Hoefler

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks

Jan 31, 2021
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
