
Ron Banner

DropCompute: simple and more robust distributed synchronous training via compute variance reduction

Jun 18, 2023
Niv Giladi, Shahar Gottlieb, Moran Shkolnik, Asaf Karnieli, Ron Banner, Elad Hoffer, Kfir Yehuda Levy, Daniel Soudry

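The title names the mechanism: reduce the variance of synchronous step time by letting workers drop part of their compute. As a toy simulation of that general idea only (not the paper's algorithm; the noise model and threshold below are made up), capping each worker's accumulated compute bounds the max-over-workers step time:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, workers, micro_batches = 1000, 256, 32

# Per-micro-batch compute times with heavy-tailed jitter (made-up stand-in
# for real straggler noise).
t = rng.lognormal(mean=0.0, sigma=0.5, size=(trials, workers, micro_batches))

# Fully synchronous step: everyone waits for the slowest worker.
baseline = t.sum(-1).max(-1)

# DropCompute-style deadline: a worker stops accumulating micro-batches once
# its running compute passes a threshold (the 95th-percentile choice here is
# arbitrary), so step time is bounded by the threshold rather than the tail.
threshold = np.quantile(t.sum(-1), 0.95)
cum = t.cumsum(-1)
dropped = np.where(cum <= threshold, t, 0.0).sum(-1).max(-1)

print(f"baseline step: {baseline.mean():.1f} +/- {baseline.std():.1f}")
print(f"with drop:     {dropped.mean():.1f} +/- {dropped.std():.1f}")
```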

Optimal Fine-Grained N:M sparsity for Activations and Neural Gradients

Mar 21, 2022
Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry

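N:M fine-grained sparsity keeps at most N non-zero values in every group of M consecutive elements. A minimal, generic 2:4 magnitude-mask sketch for orientation; the paper's subject, optimal N:M sparsity for activations and neural gradients, goes beyond this weight-style masking:

```python
import numpy as np

def nm_sparsify(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Zero all but the n largest-magnitude entries in each group of m
    consecutive values (generic N:M illustration, not the paper's method)."""
    flat = weights.reshape(-1, m)
    # Indices of the (m - n) smallest magnitudes in each group.
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (flat * mask).reshape(weights.shape)

w = np.random.default_rng(0).standard_normal((8, 8)).astype(np.float32)
w_sparse = nm_sparsify(w)   # every group of 4 keeps its 2 largest entries
assert (np.count_nonzero(w_sparse.reshape(-1, 4), axis=1) <= 2).all()
```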

Energy awareness in low precision neural networks

Feb 06, 2022
Nurit Spingarn Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaakov, Tomer Michaeli


On Recoverability of Graph Neural Network Representations

Jan 30, 2022
Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Avi Mendelson


Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning

Dec 19, 2021
Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry

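"Unbiased" logarithmic quantization means stochastic rounding to powers of two such that the quantizer's expectation equals the input. A toy sketch of that property alone (the paper's practical 4-bit scheme also handles exponent range and clipping, omitted here):

```python
import numpy as np

def log_quant_unbiased(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stochastically round each magnitude to one of its two neighboring
    powers of two, with probabilities chosen so that E[q(x)] = x."""
    sign, mag = np.sign(x), np.abs(x)
    out = np.zeros_like(x)
    nz = mag > 0
    lo = 2.0 ** np.floor(np.log2(mag[nz]))   # power of two just below |x|
    hi = 2.0 * lo                            # power of two just above |x|
    p = (mag[nz] - lo) / (hi - lo)           # round-up probability => unbiased
    out[nz] = np.where(rng.random(p.shape) < p, hi, lo)
    return sign * out

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)
avg = np.mean([log_quant_unbiased(x, rng) for _ in range(256)], axis=0)
print(np.abs(avg - x).mean())   # shrinks as repeats grow: the rounding is unbiased
```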

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

Feb 16, 2021
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry

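A transposable N:M mask stays N:M after transposition, so both the forward pass (W) and the backward pass (W^T) enjoy structured sparsity. The paper's contribution is an efficient, provable way to find such masks; purely as an illustration of the constraint, an exhaustive search over one 4x4 block:

```python
import numpy as np
from itertools import combinations, product

def best_transposable_mask(block: np.ndarray, n: int = 2) -> np.ndarray:
    """Brute-force a mask with exactly n ones per row AND per column of a
    small square block, maximizing kept |weight| (illustrative only)."""
    m = block.shape[0]
    row_choices = list(combinations(range(m), n))   # C(4,2) = 6 per row
    best_mask, best_score = None, -np.inf
    for rows in product(row_choices, repeat=m):     # 6**4 = 1296 candidates
        mask = np.zeros((m, m), dtype=bool)
        for i, cols in enumerate(rows):
            mask[i, list(cols)] = True
        if not (mask.sum(axis=0) == n).all():       # enforce column constraint
            continue
        score = np.abs(block)[mask].sum()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask

w = np.random.default_rng(0).standard_normal((4, 4))
mask = best_transposable_mask(w)
print(mask.sum(axis=0), mask.sum(axis=1))   # both [2 2 2 2]: 2:4 in rows and columns
```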

GAN Steerability without optimization

Dec 09, 2020
Nurit Spingarn-Eliezer, Ron Banner, Tomer Michaeli


Neural gradients are lognormally distributed: understanding sparse and quantized training

Jun 17, 2020
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry


Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming

Jun 14, 2020
Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry

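Layer-wise calibration in post-training quantization fits each layer's quantization parameters on a small calibration sample rather than retraining the network. As a generic sketch only (not the paper's exact objective, and omitting its integer-programming bit allocation), a grid search for the per-tensor scale that minimizes quantization MSE:

```python
import numpy as np

def quantize(x: np.ndarray, scale: float, n_bits: int = 4) -> np.ndarray:
    """Symmetric uniform quantizer: round to integers, clip to the n-bit range."""
    qmax = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def calibrate_scale(x: np.ndarray, n_bits: int = 4, n_grid: int = 100) -> float:
    """Pick the per-tensor scale minimizing quantization MSE on a
    calibration sample (a generic stand-in for layer-wise calibration)."""
    qmax = 2 ** (n_bits - 1) - 1
    best_scale, best_err = None, np.inf
    for frac in np.linspace(0.1, 1.0, n_grid):
        scale = frac * np.abs(x).max() / qmax
        err = np.mean((x - quantize(x, scale, n_bits)) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

x = np.random.default_rng(0).standard_normal(10000)  # stand-in for one layer's tensor
s = calibrate_scale(x)
print(f"scale={s:.4f}, mse={np.mean((x - quantize(x, s)) ** 2):.5f}")
```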