Lei Wang

Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences

Computation and Communication Efficient Lightweighting Vertical Federated Learning

Mar 30, 2024

PCToolkit: A Unified Plug-and-Play Prompt Compression Toolkit of Large Language Models

Mar 26, 2024

Space Group Informed Transformer for Crystalline Materials Generation

Mar 23, 2024

View-decoupled Transformer for Person Re-identification under Aerial-ground Camera Network

Mar 21, 2024

The Whole is Better than the Sum: Using Aggregated Demonstrations in In-Context Learning for Sequential Recommendation

Mar 15, 2024

Taming Cross-Domain Representation Variance in Federated Prototype Learning with Heterogeneous Data Domains

Mar 14, 2024

Gradient-Aware Logit Adjustment Loss for Long-Tailed Classifiers

Mar 14, 2024

Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack

Mar 14, 2024

Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning

Feb 29, 2024

All in a Single Image: Large Multimodal Models are In-Image Learners

Feb 28, 2024