
Shibo Jie

Mixture of Lookup Experts

Mar 20, 2025

SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs

Mar 20, 2025

Token Compensator: Altering Inference Cost of Vision Transformer without Re-Tuning

Aug 13, 2024

Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning

May 09, 2024

Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

Jul 31, 2023

Detachedly Learn a Classifier for Class-Incremental Learning

Feb 23, 2023

FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer

Dec 06, 2022

Convolutional Bypasses Are Better Vision Transformer Adapters

Jul 18, 2022

Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework

May 19, 2022

Alleviating Representational Shift for Continual Fine-tuning

Apr 22, 2022