Hamed Pirsiavash

University of Maryland Baltimore County

Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization (Nov 30, 2023)
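
The title pairs Gaussian-splat compression with vector quantization. As a minimal sketch of vector quantization in general (assuming a K-means codebook; this is not the paper's exact pipeline, and all names and sizes here are illustrative), the snippet below clusters parameter vectors and stores only a codebook plus per-vector indices:

    import numpy as np

    def kmeans_vq(params, k=256, iters=20, seed=0):
        """Quantize rows of params (n, d) down to a k-entry codebook."""
        rng = np.random.default_rng(seed)
        codebook = params[rng.choice(len(params), size=k, replace=False)]
        for _ in range(iters):
            # Assign each vector to its nearest codeword (squared L2).
            d2 = ((params ** 2).sum(1)[:, None]
                  - 2.0 * params @ codebook.T
                  + (codebook ** 2).sum(1)[None, :])
            idx = d2.argmin(1)
            # Move each codeword to the mean of its assigned vectors.
            for j in range(k):
                members = params[idx == j]
                if len(members):
                    codebook[j] = members.mean(0)
        return codebook, idx

    # Toy stand-in for per-Gaussian parameter vectors.
    params = np.random.default_rng(1).normal(size=(10_000, 8)).astype(np.float32)
    codebook, idx = kmeans_vq(params)
    reconstructed = codebook[idx]  # decompression is a table lookup

Storage falls from n*d floats to k*d floats plus n small integer indices, which is the generic payoff of vector quantization.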

BrainWash: A Poisoning Attack to Forget in Continual Learning (Nov 24, 2023)

NOLA: Networks as Linear Combination of Low Rank Random Basis (Oct 04, 2023)
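
The NOLA title itself names the mechanism: a weight update expressed as a linear combination of frozen low-rank random bases. A hedged sketch of that reading follows (not the authors' implementation; the function name, rank, basis count, and seeding scheme are assumptions): only the mixing coefficients would be trained or stored, since the random factors can be regenerated from a seed.

    import numpy as np

    def nola_delta(alpha, beta, d_out, d_in, rank=4, seed=0):
        """Rebuild a weight update from coefficients over frozen random factors."""
        rng = np.random.default_rng(seed)
        k = len(alpha)
        # Frozen random low-rank factors, regenerated from the seed on demand.
        A = rng.normal(size=(k, d_out, rank)).astype(np.float32)
        B = rng.normal(size=(k, rank, d_in)).astype(np.float32)
        # Mix the bases with the coefficients, then multiply the two factors.
        A_mix = np.tensordot(alpha, A, axes=1)  # (d_out, rank)
        B_mix = np.tensordot(beta, B, axes=1)   # (rank, d_in)
        return A_mix @ B_mix                    # (d_out, d_in)

    rng = np.random.default_rng(1)
    alpha = rng.normal(size=32).astype(np.float32)
    beta = rng.normal(size=32).astype(np.float32)
    dW = nola_delta(alpha, beta, d_out=768, d_in=768)
    print(dW.shape)  # (768, 768), encoded by 64 floats plus one seed

The point of the construction is that the stored-parameter count is decoupled from both the matrix shape and the rank.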

SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers (Oct 04, 2023)

A Cookbook of Self-Supervised Learning (Apr 24, 2023)

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning (Apr 04, 2023)

Is Multi-Task Learning an Upper Bound for Continual Learning? (Oct 26, 2022)

SimA: Simple Softmax-free Attention for Vision Transformers (Jun 17, 2022)
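
The SimA title promises softmax-free attention. Below is a hedged sketch of one recipe consistent with that title: l1-normalize Q and K, drop softmax, and exploit the fact that without softmax the matrix products associate in either order, (QK^T)V or Q(K^T V). The normalization axis chosen here is an illustrative assumption, not necessarily the paper's.

    import numpy as np

    def sima_attention(Q, K, V, eps=1e-6):
        """Softmax-free attention via l1-normalized Q and K (sketch)."""
        # Normalize each channel over the token axis (axis choice assumed).
        Qh = Q / (np.abs(Q).sum(axis=0, keepdims=True) + eps)
        Kh = K / (np.abs(K).sum(axis=0, keepdims=True) + eps)
        n, d = Q.shape
        # Both association orders give the same result; pick the cheaper one:
        # (Qh @ Kh.T) @ V costs O(n^2 d), Qh @ (Kh.T @ V) costs O(n d^2).
        if n <= d:
            return (Qh @ Kh.T) @ V
        return Qh @ (Kh.T @ V)

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(196, 64)).astype(np.float32) for _ in range(3))
    out = sima_attention(Q, K, V)
    print(out.shape)  # (196, 64)

Removing softmax is what makes the linear-in-tokens ordering available; with softmax the product cannot be re-associated.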

Backdoor Attacks on Vision Transformers (Jun 16, 2022)

PRANC: Pseudo RAndom Networks for Compacting deep models (Jun 16, 2022)
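
PRANC's title again carries the idea: compact a deep model into pseudo-random networks. A minimal sketch under that reading (illustrative names and sizes; not the authors' code): express a flat weight vector as a linear combination of frozen pseudo-random basis vectors regenerated from seeds, so a checkpoint reduces to one base seed plus the coefficients.

    import numpy as np

    def pranc_reconstruct(coeffs, n_params, base_seed=0):
        """Rebuild a flat weight vector from coefficients over random bases."""
        w = np.zeros(n_params, dtype=np.float32)
        for i, c in enumerate(coeffs):
            # Each basis is regenerated from (base_seed + i); nothing stored.
            basis = np.random.default_rng(base_seed + i).standard_normal(
                n_params, dtype=np.float32)
            w += c * basis  # accumulate one basis at a time: O(n) memory
        return w

    coeffs = np.random.default_rng(42).standard_normal(500, dtype=np.float32)
    w = pranc_reconstruct(coeffs, n_params=1_000_000)
    print(w.shape)  # (1000000,) recoverable from 500 floats plus a seed

Reconstruction trades compute for storage: the bases are regenerated on demand rather than saved.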