
Haoran You


Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks

Oct 24, 2022

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Oct 18, 2022

SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning

Jul 08, 2022

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

May 17, 2022

LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference

Mar 15, 2022

I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization

Mar 07, 2022

GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design

Dec 22, 2021

G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency

Sep 18, 2021

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

Mar 19, 2021