
Hamed Pirsiavash

University of Maryland Baltimore County

One Category One Prompt: Dataset Distillation using Diffusion Models

Mar 11, 2024

GeNIe: Generative Hard Negative Images Through Diffusion

Dec 05, 2023

Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization

Nov 30, 2023

BrainWash: A Poisoning Attack to Forget in Continual Learning

Nov 24, 2023

SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers

Oct 04, 2023

NOLA: Networks as Linear Combination of Low Rank Random Basis

Oct 04, 2023

A Cookbook of Self-Supervised Learning

Apr 24, 2023

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning

Apr 04, 2023

Is Multi-Task Learning an Upper Bound for Continual Learning?

Oct 26, 2022

SimA: Simple Softmax-free Attention for Vision Transformers

Jun 17, 2022