Doyoung Kim

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

Apr 16, 2024
Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo


Semiparametric Token-Sequence Co-Supervision

Mar 14, 2024
Hyunji Lee, Doyoung Kim, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon On, Minjoon Seo


Joint Mechanical and Electrical Adjustment of IRS-aided LEO Satellite MIMO Communications

Jan 12, 2024
Doyoung Kim, Seongah Jeong


Adaptive Shortcut Debiasing for Online Continual Learning

Dec 14, 2023
Doyoung Kim, Dongmin Park, Yooju Shin, Jihwan Bang, Hwanjun Song, Jae-Gil Lee


One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning

Nov 18, 2023
Doyoung Kim, Susik Yoon, Dongmin Park, Youngjun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee


How Well Do Large Language Models Truly Ground?

Nov 15, 2023
Hyunji Lee, Sejune Joo, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon On, Minjoon Seo


Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy

Nov 02, 2023
Dongmin Park, Seola Choi, Doyoung Kim, Hwanjun Song, Jae-Gil Lee


Energy-Efficient Secure Offloading System Designed via UAV-Mounted Intelligent Reflecting Surface for Resilience Enhancement

Sep 29, 2023
Doyoung Kim, Seongah Jeong, Jinkyu Kang


FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Jul 20, 2023
Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo


The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

May 23, 2023
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
