
Zhewei Yao

How Much Can CLIP Benefit Vision-and-Language Tasks?

Jul 13, 2021

MLPruning: A Multilevel Structured Pruning Framework for Transformer-based Models

May 30, 2021

ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training

Apr 29, 2021

Q-ASR: Integer-only Zero-shot Quantization for Efficient Speech Recognition

Mar 31, 2021

A Survey of Quantization Methods for Efficient Neural Network Inference

Mar 25, 2021

I-BERT: Integer-only BERT Quantization

Feb 11, 2021

Hessian-Aware Pruning and Optimal Neural Implant

Feb 06, 2021

HAWQV3: Dyadic Neural Network Quantization

Nov 20, 2020

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks

Oct 27, 2020

MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding

Oct 12, 2020