Kyuhong Shim

Expand-and-Quantize: Unsupervised Semantic Segmentation Using High-Dimensional Space and Product Quantization

Dec 12, 2023
Jiyoung Kim, Kyuhong Shim, Insu Lee, Byonghyo Shim

Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data

Aug 31, 2023
Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyung Chang

Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer

Aug 31, 2023
Kyuhong Shim, Jinkyu Lee, Simyung Chang, Kyuwoong Hwang

Depth-Relative Self Attention for Monocular Depth Estimation

Apr 25, 2023
Kyuhong Shim, Jiyoung Kim, Gusang Lee, Byonghyo Shim

Semantic-Preserving Augmentation for Robust Image-Text Retrieval

Mar 10, 2023
Sunwoo Kim, Kyuhong Shim, Luong Trung Nguyen, Byonghyo Shim

Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers

Feb 23, 2023
Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi

Vision Transformer-based Feature Extraction for Generalized Zero-Shot Learning

Feb 02, 2023
Jiseob Kim, Kyuhong Shim, Junhan Kim, Byonghyo Shim

Exploring Attention Map Reuse for Efficient Transformer Neural Networks

Jan 29, 2023
Kyuhong Shim, Jungwook Choi, Wonyong Sung

A Comparison of Transformer, Convolutional, and Recurrent Neural Networks on Phoneme Recognition

Oct 01, 2022
Kyuhong Shim, Wonyong Sung

Towards Intelligent Millimeter and Terahertz Communication for 6G: Computer Vision-aided Beamforming

Sep 06, 2022
Yongjun Ahn, Jinhong Kim, Seungnyun Kim, Kyuhong Shim, Jiyoung Kim, Sangtae Kim, Byonghyo Shim