Changdae Oh

VAUQ: Vision-Aware Uncertainty Quantification for LVLM Self-Evaluation

Feb 24, 2026

Thinking Makes LLM Agents Introverted: How Mandatory Thinking Can Backfire in User-Engaged Agents

Feb 08, 2026

Towards Reducible Uncertainty Modeling for Reliable Large Language Model Agents

Feb 04, 2026

How Do Transformers Learn to Associate Tokens: Gradient Leading Terms Bring Mechanistic Interpretability

Jan 27, 2026

Visual Instruction Bottleneck Tuning

May 20, 2025

DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation

Oct 03, 2024

Perturb-and-Compare Approach for Detecting Out-of-Distribution Samples in Constrained Access Environments

Aug 19, 2024

Enhancing Temporal Action Localization: Advanced S6 Modeling with Recurrent Mechanism

Jul 18, 2024

Mitigating the Linguistic Gap with Phonemic Representations for Robust Multilingual Language Understanding

Feb 22, 2024

Towards Calibrated Robust Fine-Tuning of Vision-Language Models

Nov 06, 2023