
Yongil Kim

Can You Trick the Grader? Adversarial Persuasion of LLM Judges (Aug 11, 2025)

EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes (Jul 15, 2025)

Don't Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation (May 22, 2025)

Reasoning Models Better Express Their Confidence (May 20, 2025)

EXAONE Deep: Reasoning Enhanced Language Models (Mar 16, 2025)

LLMs can be easily Confused by Instructional Distractions (Feb 05, 2025)

EXAONE 3.5: Series of Large Language Models for Real-world Use Cases (Dec 09, 2024)

Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation (Oct 28, 2024)

SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models (Oct 25, 2024)

MP2D: An Automated Topic Shift Dialogue Generation Framework Leveraging Knowledge Graphs (Mar 09, 2024)