
Bingqian Lu

A Semi-Decoupled Approach to Fast and Optimal Hardware-Software Co-Design of Neural Accelerators

Mar 25, 2022

One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search

Nov 03, 2021

Scaling Up Deep Neural Network Optimization for Edge Inference

Sep 17, 2020

A Note on Latency Variability of Deep Neural Networks for Mobile Inference

Feb 29, 2020