Minchan Jeong

Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning

Feb 13, 2024
Haeju Lee, Minchan Jeong, Se-Young Yun, Kee-Eung Kim


FedSoL: Bridging Global Alignment and Local Generality in Federated Learning

Aug 24, 2023
Gihun Lee, Minchan Jeong, Sangmook Kim, Jaehoon Oh, Se-Young Yun


Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning

Mar 03, 2023
Jihwan Oh, Joonkee Kim, Minchan Jeong, Se-Young Yun


Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective

Feb 03, 2023
Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun


Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning

Jun 06, 2021
Gihun Lee, Yongjin Shin, Minchan Jeong, Se-Young Yun
