Grace Li Zhang

EncodingNet: A Novel Encoding-based MAC Design for Efficient Neural Network Acceleration

Feb 25, 2024
Bo Liu, Grace Li Zhang, Xunzhao Yin, Ulf Schlichtmann, Bing Li

Class-Aware Pruning for Efficient Neural Networks

Dec 10, 2023
Mengnan Jiang, Jingcun Wang, Amro Eldebiky, Xunzhao Yin, Cheng Zhuo, Ing-Chao Lin, Grace Li Zhang

Early Classification for Dynamic Inference of Neural Networks

Sep 23, 2023
Jingcun Wang, Bing Li, Grace Li Zhang

Logic Design of Neural Networks for High-Throughput and Low-Power Applications

Sep 19, 2023
Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li

Expressivity Enhancement with Efficient Quadratic Neurons for Convolutional Neural Networks

Jun 10, 2023
Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li

PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration

Mar 24, 2023
Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li

Class-based Quantization for Neural Networks

Nov 27, 2022
Wenhao Sun, Grace Li Zhang, Huaxi Gu, Bing Li, Ulf Schlichtmann

SteppingNet: A Stepping Neural Network with Incremental Accuracy Enhancement

Nov 27, 2022
Wenhao Sun, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Huaxi Gu, Bing Li, Ulf Schlichtmann

CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation

Nov 27, 2022
Amro Eldebiky, Grace Li Zhang, Georg Boecherer, Bing Li, Ulf Schlichtmann
