Fengbin Tu

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications

Jul 19, 2023

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

Feb 24, 2023

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

Jul 25, 2021