Bingqian Lu

A Semi-Decoupled Approach to Fast and Optimal Hardware-Software Co-Design of Neural Accelerators

Mar 25, 2022
Bingqian Lu, Zheyu Yan, Yiyu Shi, Shaolei Ren

One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search

Nov 3, 2021
Bingqian Lu, Jianyi Yang, Weiwen Jiang, Yiyu Shi, Shaolei Ren

Scaling Up Deep Neural Network Optimization for Edge Inference

Sep 17, 2020
Bingqian Lu, Jianyi Yang, Shaolei Ren

A Note on Latency Variability of Deep Neural Networks for Mobile Inference

Feb 29, 2020
Luting Yang, Bingqian Lu, Shaolei Ren