
Hanqiu Chen

Residual-INR: Communication Efficient On-Device Learning Using Implicit Neural Representation

Aug 10, 2024

HLSFactory: A Framework Empowering High-Level Synthesis Datasets for Machine Learning and Beyond

May 01, 2024

Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation

Jun 29, 2023

DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference

Apr 13, 2023

Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU

Oct 08, 2022