Boris Murmann

Improving the Energy Efficiency and Robustness of tinyML Computer Vision using Log-Gradient Input Images

Mar 04, 2022
Qianyun Lu, Boris Murmann

Low-Rank Training of Deep Neural Networks for Emerging Memory Technology

Sep 08, 2020
Albert Gural, Phillip Nadeau, Mehul Tikekar, Boris Murmann

Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory

Feb 25, 2020
Elaina Chai, Mert Pilanci, Boris Murmann

BinarEye: An Always-On Energy-Accuracy-Scalable Binary CNN Processor With All Memory On Chip in 28nm CMOS

Apr 16, 2018
Bert Moons, Daniel Bankman, Lita Yang, Boris Murmann, Marian Verhelst

Convolutional Neural Networks using Logarithmic Data Representation

Mar 17, 2016
Daisuke Miyashita, Edward H. Lee, Boris Murmann
