Jongwoo Ko

DistiLLM: Towards Streamlined Distillation for Large Language Models

Feb 06, 2024
Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun

Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models

Nov 27, 2023
Yongjin Yang, Jongwoo Ko, Se-Young Yun

Fine-tuning Pre-trained Models for Robustness Under Noisy Labels

Oct 24, 2023
Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun

NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models

Oct 16, 2023
Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun

Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding

Oct 09, 2023
Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun

CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition

Feb 10, 2023
Sumyeong Ahn, Jongwoo Ko, Se-Young Yun

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective

Feb 03, 2023
Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun

Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks

Oct 18, 2022
Jaehoon Oh, Jongwoo Ko, Se-Young Yun

ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise

Jun 15, 2022
Jongwoo Ko, Bongsoo Yi, Se-Young Yun

Self-Contrastive Learning

Jul 14, 2021
Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
