
Young Jin Kim

Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

Feb 02, 2024

PEMA: Plug-in External Memory Adaptation for Language Models

Nov 14, 2023

Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness

Oct 03, 2023

A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models

Sep 20, 2023

Task-Based MoE for Multitask Multilingual Machine Translation

Sep 11, 2023

FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs

Aug 16, 2023

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

Feb 18, 2023

Who Says Elephants Can't Run: Bringing Large Scale MoE Models into Cloud Scale Production

Nov 18, 2022

AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers

Oct 14, 2022

Fast Vocabulary Projection Method via Clustering for Multilingual Machine Translation on GPU

Aug 14, 2022