Sungnyun Kim

DistiLLM: Towards Streamlined Distillation for Large Language Models
Feb 06, 2024
Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun

STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models
Dec 14, 2023
Kangwook Jang, Sungnyun Kim, Hoirin Kim

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
May 24, 2023
Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn

Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification
May 23, 2023
Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun

Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
May 19, 2023
Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning
Mar 24, 2023
Sungnyun Kim, Sangmin Bae, Se-Young Yun

Revisiting the Updates of a Pre-trained Model for Few-shot Learning
May 13, 2022
Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning
May 11, 2022
Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

Understanding Cross-Domain Few-Shot Learning: An Experimental Study
Feb 08, 2022
Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
