
Peijie Dong

Multi-Task Domain Adaptation for Language Grounding with 3D Objects

Jul 03, 2024

Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models

Jun 05, 2024

VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting

Mar 26, 2024

ParZC: Parametric Zero-Cost Proxies for Efficient NAS

Feb 03, 2024

Auto-Prox: Training-Free Vision Transformer Architecture Search via Automatic Proxy Discovery

Dec 14, 2023

TVT: Training-Free Vision Transformer Search on Tiny Datasets

Nov 24, 2023

Dissecting the Runtime Performance of the Training, Fine-tuning, and Inference of Large Language Models

Nov 07, 2023

EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization

Jul 20, 2023

DisWOT: Student Architecture Search for Distillation WithOut Training

Mar 28, 2023

RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies

Jan 24, 2023