Eunki Kim

Multi-Objective Task-Aware Predictor for Image-Text Alignment

Oct 01, 2025

On the Robustness of Reward Models for Language Model Alignment

May 12, 2025

Sightation Counts: Leveraging Sighted User Feedback in Building a BLV-aligned Dataset of Diagram Descriptions

Mar 17, 2025

AlphaPO -- Reward shape matters for LLM alignment

Jan 07, 2025

I0T: Embedding Standardization Method Towards Zero Modality Gap

Dec 18, 2024