Amir Gholami

UC Berkeley/LBNL/ICSI

Big Little Transformer Decoder
Feb 15, 2023

Adaptive Self-supervision Algorithms for Physics-informed Neural Networks
Jul 08, 2022

Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
Jun 02, 2022

Applications and Techniques for Fast Machine Learning in Science
Oct 25, 2021

Characterizing possible failure modes in physics-informed neural networks
Sep 02, 2021

Learned Token Pruning for Transformers
Jul 02, 2021

Q-ASR: Integer-only Zero-shot Quantization for Efficient Speech Recognition
Mar 31, 2021

A Survey of Quantization Methods for Efficient Neural Network Inference
Mar 25, 2021

I-BERT: Integer-only BERT Quantization
Feb 11, 2021

Hessian-Aware Pruning and Optimal Neural Implant
Feb 06, 2021