Caiwen Ding

Exploration of Quantum Neural Architecture by Mixing Quantum Neuron Designs

Sep 08, 2021

Binary Complex Neural Network Acceleration on FPGA

Aug 10, 2021

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

Jun 16, 2021

A Compression-Compilation Framework for On-mobile Real-time BERT Applications

Jun 06, 2021

Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices

Feb 12, 2021

A Surrogate Lagrangian Relaxation-based Model Compression for Deep Neural Networks

Dec 18, 2020

Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning

Oct 08, 2020

Achieving Real-Time Execution of Transformer-based Large-scale Models on Mobile with Compiler-aware Neural Architecture Optimization

Sep 15, 2020

SAPAG: A Self-Adaptive Privacy Attack From Gradients

Sep 14, 2020

ESMFL: Efficient and Secure Models for Federated Learning

Sep 03, 2020