Yingyan Lin
Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning

Apr 24, 2023
Yonggan Fu, Ye Yuan, Shang Wu, Jiayi Yuan, Yingyan Lin

ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement

Mar 24, 2023
Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal

INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy Geometry Priors

Dec 05, 2022
Chaojian Li, Bichen Wu, Albert Pumarola, Peizhao Zhang, Yingyan Lin, Peter Vajda

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022
Haoran You, Yunyang Xiong, Xiaoliang Dai, Bichen Wu, Peizhao Zhang, Haoqi Fan, Peter Vajda, Yingyan Lin

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention

Nov 09, 2022
Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin

Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing

Nov 02, 2022
Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Lai, Yingyan Lin

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks

Oct 24, 2022
Huihong Shi, Haoran You, Yang Zhao, Zhongfeng Wang, Yingyan Lin

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Oct 18, 2022
Haoran You, Zhanyi Sun, Huihong Shi, Zhongzhi Yu, Yang Zhao, Yongan Zhang, Chaojian Li, Baopu Li, Yingyan Lin

SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning

Jul 08, 2022
Haoran You, Baopu Li, Zhanyi Sun, Xu Ouyang, Yingyan Lin

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks

Jun 02, 2022
Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
