Yingyan Lin

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

May 17, 2022
Haoran You, Baopu Li, Huihong Shi, Yonggan Fu, Yingyan Lin


Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?

Apr 05, 2022
Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin


BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling

Mar 26, 2022
Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin


PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Mar 20, 2022
Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin


LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference

Mar 15, 2022
Zhongzhi Yu, Yonggan Fu, Shang Wu, Mengquan Li, Haoran You, Yingyan Lin


I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization

Mar 07, 2022
Tong Geng, Chunshu Wu, Yongan Zhang, Cheng Tan, Chenhao Xie, Haoran You, Martin C. Herbordt, Yingyan Lin, Ang Li


GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design

Dec 22, 2021
Haoran You, Tong Geng, Yongan Zhang, Ang Li, Yingyan Lin


MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation

Dec 21, 2021
Zhongzhi Yu, Yonggan Fu, Sicheng Li, Chaojian Li, Yingyan Lin


FBNetV5: Neural Architecture Search for Multiple Tasks in One Run

Nov 30, 2021
Bichen Wu, Chaojian Li, Hang Zhang, Xiaoliang Dai, Peizhao Zhang, Matthew Yu, Jialiang Wang, Yingyan Lin, Peter Vajda
