Mohamed S. Abdelfattah

Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs

May 06, 2024

Encodings for Prediction-based Neural Architecture Search

Mar 04, 2024

On Latency Predictors for Neural Architecture Search

Mar 04, 2024

Fast Inference Through The Reuse Of Attention Maps In Diffusion Models

Dec 13, 2023

FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search

Aug 07, 2023

DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms

Aug 02, 2023

Multi-Predict: Few Shot Predictors For Efficient Neural Architecture Search

Jun 04, 2023

Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design

Sep 20, 2022

Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference

Jan 02, 2022

Temporal Kernel Consistency for Blind Video Super-Resolution

Aug 18, 2021