Dan Alistarh

How Well Do Sparse Imagenet Models Transfer?
Dec 02, 2021
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh

Project CGX: Scalable Deep Learning on Commodity GPUs
Nov 17, 2021
Ilia Markov, Hamidreza Ramezanikebrya, Dan Alistarh

Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization
Jul 09, 2021
Elias Frantar, Eldar Kurtic, Dan Alistarh

SSSE: Efficiently Erasing Samples from Trained Machine Learning Models
Jul 08, 2021
Alexandra Peste, Dan Alistarh, Christoph H. Lampert

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks
Jun 23, 2021
Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh

NUQSGD: Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization
May 01, 2021
Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, Daniel M. Roy

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Jan 31, 2021
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste