
SangKeun Lee

Korea University

SCRIPT: A Subcharacter Compositional Representation Injection Module for Korean Pre-Trained Language Models

Apr 14, 2026

Enhancing Zero-shot Commonsense Reasoning by Integrating Visual Knowledge via Machine Imagination

Mar 05, 2026

Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment

Oct 30, 2025

C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning

Nov 01, 2024

CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning

Oct 31, 2024

MELT: Materials-aware Continued Pre-training for Language Model Adaptation to Materials Science

Oct 19, 2024

Zero-shot Commonsense Reasoning over Machine Imagination

Oct 12, 2024

Mentor-KD: Making Small Language Models Better Multi-step Reasoners

Oct 11, 2024

DIVE: Towards Descriptive and Diverse Visual Commonsense Generation

Aug 15, 2024

Improving Bias Mitigation through Bias Experts in Natural Language Understanding

Dec 06, 2023