Song Han

DGR: Tackling Drifted and Correlated Noise in Quantum Error Correction via Decoding Graph Re-weighting

Nov 27, 2023
Hanrui Wang, Pengyu Liu, Yilian Liu, Jiaqi Gu, Jonathan Baker, Frederic T. Chong, Song Han

RobustState: Boosting Fidelity of Quantum State Preparation via Noise-Aware Variational Training

Nov 27, 2023
Hanrui Wang, Yilian Liu, Pengyu Liu, Jiaqi Gu, Zirui Li, Zhiding Liang, Jinglei Cheng, Yongshan Ding, Xuehai Qian, Yiyu Shi, David Z. Pan, Frederic T. Chong, Song Han

Machine learning's own Industrial Revolution

Nov 04, 2023
Yuan Luo, Song Han, Jingjing Liu

PockEngine: Sparse and Efficient Fine-tuning in a Pocket

Oct 26, 2023
Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Chen Wang, Wei-Ming Chen, Chuang Gan, Song Han

Efficient Streaming Language Models with Attention Sinks

Sep 29, 2023
Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models

Sep 21, 2023
Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Jun 01, 2023
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, Song Han

FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

May 21, 2023
Guangxuan Xiao, Tianwei Yin, William T. Freeman, Frédo Durand, Song Han

SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer

Mar 30, 2023
Xuanyao Chen, Zhijian Liu, Haotian Tang, Li Yi, Hang Zhao, Song Han

Offsite-Tuning: Transfer Learning without Full Model

Feb 09, 2023
Guangxuan Xiao, Ji Lin, Song Han
