
Hongye Jin

KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches

Jul 01, 2024

KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache

Feb 05, 2024

LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

Jan 02, 2024

Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering

Dec 29, 2023

GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length

Oct 01, 2023

Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

Apr 27, 2023

Weight Perturbation Can Help Fairness under Distribution Shift

Mar 06, 2023

Retiring ΔDP: New Distribution-Level Metrics for Demographic Parity

Jan 31, 2023

Disentangled Graph Collaborative Filtering

Jul 03, 2020