Yanyu Li

E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Jan 11, 2024

HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion

Oct 12, 2023

SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds

Jun 03, 2023

Rethinking Vision Transformers for MobileNet Size and Speed

Dec 15, 2022

Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training

Sep 22, 2022

PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems

Sep 18, 2022

Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization

Aug 10, 2022

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022

Real-Time Portrait Stylization on the Edge

Jun 02, 2022

Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization

Jun 02, 2022