Minsoo Kim


VIGFace: Virtual Identity Generation Model for Face Image Synthesis

Mar 13, 2024
Minsoo Kim, Min-Cheol Sagong, Gi Pyo Nam, Junghyun Cho, Ig-Jae Kim


IG-FIQA: Improving Face Image Quality Assessment through Intra-class Variance Guidance robust to Inaccurate Pseudo-Labels

Mar 13, 2024
Minsoo Kim, Gi Pyo Nam, Haksub Kim, Haesol Park, Ig-Jae Kim


Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization

Nov 09, 2023
Jangwhan Lee, Minsoo Kim, Seungcheol Baek, Seok Joong Hwang, Wonyong Sung, Jungwook Choi


Token-Scaled Logit Distillation for Ternary Weight Generative Language Models

Aug 13, 2023
Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, Jungwook Choi


Self-supervised Equality Embedded Deep Lagrange Dual for Approximate Constrained Optimization

Jul 02, 2023
Minsoo Kim, Hongseok Kim


Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding

Mar 07, 2023
Minyoung Hwang, Jaeyeon Jeong, Minsoo Kim, Yoonseon Oh, Songhwai Oh


Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers

Feb 23, 2023
Minsoo Kim, Kyuhong Shim, Seongmin Park, Wonyong Sung, Jungwook Choi


Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders

Nov 20, 2022
Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, Jungwook Choi
