Fengbin Tu

A 28nm 0.22 μJ/token Memory-Compute-Intensity-Aware CNN-Transformer Accelerator with Hybrid-Attention-Based Layer Fusion and Cascaded Pruning for Semantic Segmentation

Dec 19, 2025

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications

Jul 19, 2023

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

Feb 24, 2023

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

Jul 25, 2021