Yingyan Lin

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Jun 02, 2022

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
May 17, 2022

Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Apr 05, 2022

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling
Mar 26, 2022

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
Mar 20, 2022

LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference
Mar 15, 2022

I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization
Mar 07, 2022

GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design
Dec 22, 2021

MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation
Dec 21, 2021

FBNetV5: Neural Architecture Search for Multiple Tasks in One Run
Nov 30, 2021