
Ao Ren

D²Prune: Sparsifying Large Language Models via Dual Taylor Expansion and Attention Distribution Awareness

Jan 14, 2026

Improving DNN Fault Tolerance using Weight Pruning and Differential Crossbar Mapping for ReRAM-based Edge AI

Jun 18, 2021

CSAFL: A Clustered Semi-Asynchronous Federated Learning Framework

Apr 16, 2021

FedSAE: A Novel Self-Adaptive Federated Learning Framework in Heterogeneous Systems

Apr 15, 2021

DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks

Nov 20, 2019

A Stochastic-Computing based Deep Learning Framework using Adiabatic Quantum-Flux-Parametron Superconducting Technology

Jul 22, 2019

ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Method of Multipliers

Dec 31, 2018

Towards Budget-Driven Hardware Optimization for Deep Convolutional Neural Networks using Stochastic Computing

May 10, 2018

Structured Weight Matrices-Based Hardware Accelerators in Deep Neural Networks: FPGAs and ASICs

Mar 28, 2018

An Area and Energy Efficient Design of Domain-Wall Memory-Based Deep Convolutional Neural Networks using Stochastic Computing

Feb 03, 2018