
Jae-Joon Kim

LiteStage: Latency-aware Layer Skipping for Multi-stage Reasoning

Oct 16, 2025

Retrospective Sparse Attention for Efficient Long-Context Generation

Aug 12, 2025

Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning

May 20, 2025

FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation

Feb 03, 2025

COMPASS: A Compiler Framework for Resource-Constrained Crossbar-Array Based In-Memory Deep Learning Accelerators

Jan 12, 2025

Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models

Jun 18, 2024

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

Feb 14, 2024

Squeezing Large-Scale Diffusion Models for Mobile

Jul 03, 2023

INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold

Apr 18, 2022

Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution

Dec 02, 2020