Yanyu Li

Efficient Training with Denoised Neural Weights

Jul 16, 2024

BitsFusion: 1.99 bits Weight Quantization of Diffusion Model

Jun 06, 2024

SF-V: Single Forward Video Generation Model

Jun 06, 2024

TextCraftor: Your Text Encoder Can be Image Quality Controller

Mar 27, 2024

E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Jan 11, 2024

HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion

Oct 12, 2023

SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds

Jun 03, 2023

Rethinking Vision Transformers for MobileNet Size and Speed

Dec 15, 2022

Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training

Sep 22, 2022

PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems

Sep 18, 2022