Mehrtash Harandi

Test-Time Instance-Specific Parameter Composition: A New Paradigm for Adaptive Generative Modeling

Mar 29, 2026

Sharpness-Aware Minimization in Logit Space Efficiently Enhances Direct Preference Optimization

Mar 18, 2026

Antibody: Strengthening Defense Against Harmful Fine-Tuning for Large Language Models via Attenuating Harmful Gradient Influence

Feb 28, 2026

Segment Any Tumour: An Uncertainty-Aware Vision Foundation Model for Whole-Body Analysis

Nov 11, 2025

Modality Alignment across Trees on Heterogeneous Hyperbolic Manifolds

Oct 31, 2025

Curvature Learning for Generalization of Hyperbolic Neural Networks

Aug 24, 2025

Exemplar-Free Continual Learning for State Space Models

May 24, 2025

Unleashing Diffusion Transformers for Visual Correspondence by Modulating Massive Activations

May 24, 2025

MoKD: Multi-Task Optimization for Knowledge Distillation

May 13, 2025

Optimizing Specific and Shared Parameters for Efficient Parameter Tuning

Apr 04, 2025