Massoud Pedram

Scalable Superconductor Neuron with Ternary Synaptic Connections for Ultra-Fast SNN Hardware

Feb 27, 2024
Mustafa Altay Karamuftuoglu, Beyza Zeynep Ucpinar, Arash Fayyazi, Sasan Razmkhah, Mehdi Kamal, Massoud Pedram

Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy

Feb 08, 2024
Seyedarmin Azizi, Mahdi Nazemi, Massoud Pedram

Low-Precision Mixed-Computation Models for Inference on Edge

Dec 03, 2023
Seyedarmin Azizi, Mahdi Nazemi, Mehdi Kamal, Massoud Pedram

An On-Chip Trainable Neuron Circuit for SFQ-Based Spiking Neural Networks

Oct 11, 2023
Beyza Zeynep Ucpinar, Mustafa Altay Karamuftuoglu, Sasan Razmkhah, Massoud Pedram

Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation

Aug 16, 2023
Seyedarmin Azizi, Mahdi Nazemi, Arash Fayyazi, Massoud Pedram

Brain Tumor Detection using Convolutional Neural Networks with Skip Connections

Jul 14, 2023
Aupam Hamran, Marzieh Vaeztourshizi, Amirhossein Esmaili, Massoud Pedram

SNT: Sharpness-Minimizing Network Transformation for Fast Compression-friendly Pretraining

May 08, 2023
Jung Hwan Heo, Seyedarmin Azizi, Arash Fayyazi, Mahdi Nazemi, Massoud Pedram

A Fast Training-Free Compression Framework for Vision Transformers

Mar 04, 2023
Jung Hwan Heo, Arash Fayyazi, Mahdi Nazemi, Massoud Pedram
