Shiqiang Wang

RCCDA: Adaptive Model Updates in the Presence of Concept Drift under a Constrained Resource Budget
May 30, 2025

Dynamically Learned Test-Time Model Routing in Language Model Zoos with Service Level Guarantees
May 26, 2025

Memory-Efficient Orthogonal Fine-Tuning with Principal Subspace Adaptation
May 16, 2025

IPBench: Benchmarking the Knowledge of Large Language Models in Intellectual Property
Apr 22, 2025

GneissWeb: Preparing High Quality Data for LLMs at Scale
Feb 19, 2025

Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining
Feb 10, 2025

Parameter Tracking in Federated Learning with Adaptive Optimization
Feb 04, 2025

Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models
Jan 24, 2025

MESS+: Energy-Optimal Inferencing in Language Model Zoos with Service Level Guarantees
Oct 31, 2024

Vertical Federated Learning with Missing Features During Training and Inference
Oct 29, 2024