Itay Hubara

Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators

Jan 25, 2024
Yaniv Blumenfeld, Itay Hubara, Daniel Soudry

Optimal Fine-Grained N:M sparsity for Activations and Neural Gradients

Mar 21, 2022
Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

Feb 16, 2021
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry

Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming

Jun 14, 2020
Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry

The Knowledge Within: Methods for Data-Free Model Compression

Dec 03, 2019
Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry

MLPerf Inference Benchmark

Nov 06, 2019
Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou

Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency

Aug 12, 2019
Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry

Augment your batch: better training with larger batches

Jan 27, 2019
Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry

Scalable Methods for 8-bit Training of Neural Networks

Jun 17, 2018
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry

Fix your classifier: the marginal value of training the last weight layer

Mar 20, 2018
Elad Hoffer, Itay Hubara, Daniel Soudry
