
Zechun Liu

Efficient Quantization-aware Training with Adaptive Coreset Selection

Jun 12, 2023

Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023

Binary and Ternary Natural Language Generation

Jun 02, 2023

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

May 29, 2023

EBSR: Enhanced Binary Neural Network for Image Super-Resolution

Mar 22, 2023

Oscillation-free Quantization for Low-bit Vision Transformers

Feb 04, 2023

SDQ: Stochastic Differentiable Quantization with Mixed Precision

Jun 17, 2022

BiT: Robustly Binarized Multi-distilled Transformer

May 25, 2022

Stereo Neural Vernier Caliper

Mar 26, 2022

Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space

Jan 03, 2022