Yingyan Lin

Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning

Apr 24, 2023

Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence

Apr 24, 2023

ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement

Mar 24, 2023

INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy Geometry Priors

Dec 05, 2022

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention

Nov 09, 2022

Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing

Nov 02, 2022

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks

Oct 24, 2022

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Oct 18, 2022

SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning

Jul 08, 2022