Junghyun Koo

SMITIN: Self-Monitored Inference-Time INtervention for Generative Music Transformers

Apr 02, 2024
Junghyun Koo, Gordon Wichern, Francois G. Germain, Sameer Khurana, Jonathan Le Roux


DDD: A Perceptually Superior Low-Response-Time DNN-based Declipper

Jan 08, 2024
Jayeon Yi, Junghyun Koo, Kyogu Lee


Exploiting Time-Frequency Conformers for Music Audio Enhancement

Aug 24, 2023
Yunkee Chae, Junghyun Koo, Sungho Lee, Kyogu Lee


Self-refining of Pseudo Labels for Music Source Separation with Noisy Labeled Data

Jul 24, 2023
Junghyun Koo, Yunkee Chae, Chang-Bin Jeon, Kyogu Lee


Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects

Nov 04, 2022
Junghyun Koo, Marco A. Martinez-Ramirez, Wei-Hsiang Liao, Stefan Uhlich, Kyogu Lee, Yuki Mitsufuji


Representation Selective Self-distillation and wav2vec 2.0 Feature Exploration for Spoof-aware Speaker Verification

Apr 06, 2022
Jin Woo Lee, Eungbeom Kim, Junghyun Koo, Kyogu Lee


End-to-end Music Remastering System Using Self-supervised and Adversarial Training

Feb 17, 2022
Junghyun Koo, Seungryeol Paik, Kyogu Lee


Reverb Conversion of Mixed Vocal Tracks Using an End-to-end Convolutional Deep Neural Network

Mar 03, 2021
Junghyun Koo, Seungryeol Paik, Kyogu Lee


Exploiting Multi-Modal Features From Pre-trained Networks for Alzheimer's Dementia Recognition

Sep 09, 2020
Junghyun Koo, Jie Hwan Lee, Jaewoo Pyo, Yujin Jo, Kyogu Lee
