
Fei Yang

Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification

Apr 01, 2024

Accelerated Inference and Reduced Forgetting: The Dual Benefits of Early-Exit Networks in Continual Learning

Mar 12, 2024

FlattenQuant: Breaking Through the Inference Compute-bound for Large Language Models with Per-tensor Quantization

Feb 28, 2024

Holmes: Towards Distributed Training Across Clusters with Heterogeneous NIC Environment

Dec 11, 2023

Exploring Post-Training Quantization of Protein Language Models

Oct 30, 2023

Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing

Sep 27, 2023

ScrollNet: Dynamic Weight Importance for Continual Learning

Aug 31, 2023

Recognition of Mental Adjectives in An Efficient and Automatic Style

Jul 16, 2023

Where to Go Next for Recommender Systems? ID- vs. Modality-based recommender models revisited

Mar 24, 2023

Gated Class-Attention with Cascaded Feature Drift Compensation for Exemplar-free Continual Learning of Vision Transformers

Nov 22, 2022