Jangho Kim

Magnitude Attention-based Dynamic Pruning

Jun 08, 2023
Jihye Back, Namhyuk Ahn, Jangho Kim

QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design

Jun 28, 2022
Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang

Domain Generalization with Relaxed Instance Frequency-wise Normalization for Multi-device Acoustic Scene Classification

Jun 24, 2022
Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, Juntae Lee, Simyung Chang

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation

Mar 03, 2022
KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak

Self-Distilled Self-Supervised Representation Learning

Nov 25, 2021
Jiho Jang, Seonhoon Kim, Kiyoon Yoo, Jangho Kim, Nojun Kwak

Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization

Nov 12, 2021
Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights

Sep 10, 2021
Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak

PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation

Jun 25, 2021
Jangho Kim, Simyung Chang, Nojun Kwak

Prototype-based Personalized Pruning

Mar 25, 2021
Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak

Position-based Scaled Gradient for Model Quantization and Sparse Training

Jun 10, 2020
Jangho Kim, KiYoon Yoo, Nojun Kwak
