
Yue Cheng

Everything You Always Wanted to Know About Storage Compressibility of Pre-Trained ML Models but Were Afraid to Ask

Feb 20, 2024
Zhaoyuan Su, Ammar Ahmed, Zirui Wang, Ali Anwar, Yue Cheng

Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models

Jan 04, 2024
Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao

Staleness-Alleviated Distributed GNN Training via Online Dynamic-Embedding Prediction

Aug 25, 2023
Guangji Bai, Ziyang Yu, Zheng Chai, Yue Cheng, Liang Zhao

Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization

May 31, 2022
Zheng Chai, Guangji Bai, Liang Zhao, Yue Cheng

A Distributed and Elastic Aggregation Service for Scalable Federated Learning Systems

Apr 16, 2022
Ahmad Khan, Yuze Li, Ali Anwar, Yue Cheng, Thang Hoang, Nathalie Baracaldo, Ali Butt

Community-based Layerwise Distributed Training of Graph Convolutional Networks

Dec 17, 2021
Hongyi Li, Junxiang Wang, Yongchao Wang, Yue Cheng, Liang Zhao

Asynchronous Federated Learning for Sensor Data with Concept Drift

Sep 01, 2021
Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM framework

May 20, 2021
Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao

FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data

Oct 12, 2020
Zheng Chai, Yujing Chen, Liang Zhao, Yue Cheng, Huzefa Rangwala

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

Sep 16, 2020
Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
