
Caiwen Ding


Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

Oct 18, 2021

Detecting Gender Bias in Transformer-based Models: A Case Study on BERT

Oct 15, 2021

Dr. Top-k: Delegate-Centric Top-k on GPUs

Sep 16, 2021

Exploration of Quantum Neural Architecture by Mixing Quantum Neuron Designs

Sep 08, 2021

Binary Complex Neural Network Acceleration on FPGA

Aug 10, 2021

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

Jun 16, 2021

A Compression-Compilation Framework for On-mobile Real-time BERT Applications

Jun 06, 2021

Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices

Feb 12, 2021

A Surrogate Lagrangian Relaxation-based Model Compression for Deep Neural Networks

Dec 18, 2020

Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning

Oct 08, 2020