Natalia Frumkin

Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers

Aug 21, 2023
Natalia Frumkin, Dibakar Gope, Diana Marculescu

MobileTL: On-device Transfer Learning with Inverted Residual Blocks

Dec 05, 2022
Hung-Yueh Chiang, Natalia Frumkin, Feng Liang, Diana Marculescu

CPT-V: A Contrastive Approach to Post-Training Quantization of Vision Transformers

Nov 17, 2022
Natalia Frumkin, Dibakar Gope, Diana Marculescu
