Hyuhng Joon Kim

Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs

Feb 19, 2025

When to Speak, When to Abstain: Contrastive Decoding with Abstention

Dec 17, 2024

Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts

Aug 02, 2024

Investigating the Influence of Prompt-Specific Shortcuts in AI Generated Text Detection

Jun 24, 2024

Aligning Language Models to Explicitly Handle Ambiguity

Apr 18, 2024

Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP

Oct 23, 2023

Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning

Jan 30, 2023

Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners

Dec 28, 2022

Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator

Jun 16, 2022

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

May 25, 2022