Ji Lin

Tiny Machine Learning: Progress and Futures

Mar 29, 2024
Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Song Han

VILA: On Pre-training for Visual Language Models

Dec 14, 2023
Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, Song Han

PockEngine: Sparse and Efficient Fine-tuning in a Pocket

Oct 26, 2023
Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Chen Wang, Wei-Ming Chen, Chuang Gan, Song Han

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Jun 01, 2023
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, Song Han

Offsite-Tuning: Transfer Learning without Full Model

Feb 09, 2023
Guangxuan Xiao, Ji Lin, Song Han

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

Nov 28, 2022
Guangxuan Xiao, Ji Lin, Mickael Seznec, Julien Demouth, Song Han

Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models

Nov 15, 2022
Muyang Li, Ji Lin, Chenlin Meng, Stefano Ermon, Song Han, Jun-Yan Zhu

On-Device Training Under 256KB Memory

Jul 14, 2022
Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, Song Han
