Yongin Kwon

LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs

Apr 16, 2024
Taeho Kim, Yanming Wang, Vatshank Chaturvedi, Lokesh Gupta, Seyeon Kim, Yongin Kwon, Sangtae Ha

Visual Preference Inference: An Image Sequence-Based Preference Reasoning in Tabletop Object Manipulation

Mar 18, 2024
Joonhyung Lee, Sangbeom Park, Yongin Kwon, Jemin Lee, Minwook Ahn, Sungjoon Choi

Tensor Slicing and Optimization for Multicore NPUs

Apr 06, 2023
Rafael Sousa, Marcio Pereira, Yongin Kwon, Taeho Kim, Namsoon Jung, Chang Soo Kim, Michael Frank, Guido Araujo

Q-HyViT: Post-Training Quantization for Hybrid Vision Transformer with Bridge Block Reconstruction

Mar 22, 2023
Jemin Lee, Yongin Kwon, Jeman Park, Misun Yu, Hwanjun Song

CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution

Jul 04, 2022
Taeho Kim, Yongin Kwon, Jemin Lee, Sangtae Ha

Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment

Feb 21, 2022
Jemin Lee, Misun Yu, Yongin Kwon, Taeho Kim
