Lujun Li

Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models
Jun 05, 2024

ParZC: Parametric Zero-Cost Proxies for Efficient NAS
Feb 03, 2024

Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery
Dec 14, 2023

TVT: Training-Free Vision Transformer Search on Tiny Datasets
Nov 24, 2023

EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization
Jul 20, 2023

NORM: Knowledge Distillation via N-to-One Representation Matching
May 23, 2023

Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling
May 21, 2023

DisWOT: Student Architecture Search for Distillation WithOut Training
Mar 28, 2023

RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies
Jan 24, 2023

Progressive Meta-Pooling Learning for Lightweight Image Classification Model
Jan 24, 2023