Dongmin Park

VLM-SubtleBench: How Far Are VLMs from Human-Level Subtle Comparative Reasoning?

Mar 09, 2026

See and Fix the Flaws: Enabling VLMs and Diffusion Models to Comprehend Visual Artifacts via Agentic Data Synthesis

Feb 24, 2026

THINKSAFE: Self-Generated Safety Alignment for Reasoning Models

Jan 30, 2026

Active Learning for Continual Learning: Keeping the Past Alive in the Present

Jan 24, 2025

Alignment without Over-optimization: Training-Free Solution for Diffusion Models

Jan 10, 2025

Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance

Oct 29, 2024

Mitigating Dialogue Hallucination for Large Multi-modal Models via Adversarial Instruction Tuning

Mar 15, 2024

Prioritizing Informative Features and Examples for Deep Learning from Noisy Data

Feb 27, 2024

Adaptive Shortcut Debiasing for Online Continual Learning

Dec 14, 2023

One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning

Nov 18, 2023