Brian Chmiel

Bimodal Distributed Binarized Neural Networks

Apr 05, 2022
Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin

Optimal Fine-Grained N:M sparsity for Activations and Neural Gradients

Mar 21, 2022
Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry

Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning

Dec 19, 2021
Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

Feb 16, 2021
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry

Neural gradients are lognormally distributed: understanding sparse and quantized training

Jun 17, 2020
Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry

Colored Noise Injection for Training Adversarially Robust Neural Networks

Mar 20, 2020
Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein

Robust Quantization: One Model to Rule Them All

Feb 18, 2020
Moran Shkolnik, Brian Chmiel, Ron Banner, Gil Shomron, Yuri Nahshan, Alex Bronstein, Uri Weiser

Smoothed Inference for Adversarially-Trained Models

Nov 17, 2019
Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Alex M. Bronstein, Avi Mendelson
