
Ahmad Sajedi

Data-to-Model Distillation: Data-Efficient Learning Framework

Nov 19, 2024

Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios

Oct 22, 2024

GSTAM: Efficient Graph Distillation with Structural Attention-Matching

Aug 29, 2024

ATOM: Attention Mixer for Efficient Dataset Distillation

May 02, 2024

ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification

Jan 02, 2024

DataDAM: Efficient Dataset Distillation with Attention Matching

Sep 29, 2023

End-to-End Supervised Multilabel Contrastive Learning

Jul 08, 2023

A New Probabilistic Distance Metric With Application In Gaussian Mixture Reduction

Jun 12, 2023

Subclass Knowledge Distillation with Known Subclass Labels

Jul 17, 2022

On the Efficiency of Subclass Knowledge Distillation in Classification Tasks

Sep 12, 2021