
Kangwook Lee

Multi-Bin Batching for Increasing LLM Inference Throughput

Dec 03, 2024

Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance

Oct 29, 2024

Parameter-Efficient Fine-Tuning of State Space Models

Oct 11, 2024

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Oct 08, 2024

ENTP: Encoder-only Next Token Prediction

Oct 02, 2024

Buffer-based Gradient Projection for Continual Federated Learning

Sep 03, 2024

Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks

Aug 01, 2024

From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data

Jun 27, 2024

Dual Operating Modes of In-Context Learning

Feb 29, 2024

Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks

Feb 06, 2024