Dibakar Gope

Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers

Aug 21, 2023
Natalia Frumkin, Dibakar Gope, Diana Marculescu

PerfSAGE: Generalized Inference Performance Predictor for Arbitrary Deep Learning Models on Edge Devices

Jan 26, 2023
Yuji Chai, Devashree Tripathy, Chuteng Zhou, Dibakar Gope, Igor Fedorov, Ramon Matas, David Brooks, Gu-Yeon Wei, Paul Whatmough

CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers

Nov 17, 2022
Natalia Frumkin, Dibakar Gope, Diana Marculescu

Restructurable Activation Networks

Aug 17, 2022
Kartikeya Bhardwaj, James Ward, Caleb Tung, Dibakar Gope, Lingchuan Meng, Igor Fedorov, Alex Chalfin, Paul Whatmough, Danny Loh

Super-Efficient Super Resolution for Fast Adversarial Defense at the Edge

Dec 29, 2021
Kartikeya Bhardwaj, Dibakar Gope, James Ward, Paul Whatmough, Danny Loh

Collapsible Linear Blocks for Super-Efficient Super Resolution

Mar 17, 2021
Kartikeya Bhardwaj, Milos Milosavljevic, Alex Chalfin, Naveen Suda, Liam O'Neil, Dibakar Gope, Lingchuan Meng, Ramon Matas, Danny Loh

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

Oct 25, 2020
Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough

Rank and run-time aware compression of NLP Applications

Oct 06, 2020
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands

Aug 03, 2020
Dibakar Gope, Jesse Beu, Matthew Mattina
