Zheng Chai
Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models

Jan 04, 2024
Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao


Staleness-Alleviated Distributed GNN Training via Online Dynamic-Embedding Prediction

Aug 25, 2023
Guangji Bai, Ziyang Yu, Zheng Chai, Yue Cheng, Liang Zhao


Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization

May 31, 2022
Zheng Chai, Guangji Bai, Liang Zhao, Yue Cheng


LOF: Structure-Aware Line Tracking based on Optical Flow

Sep 17, 2021
Meixiang Quan, Zheng Chai, Xiao Liu


Asynchronous Federated Learning for Sensor Data with Concept Drift

Sep 01, 2021
Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala


Method Towards CVPR 2021 Image Matching Challenge

Aug 11, 2021
Xiaopeng Bi, Yu Chen, Xinyang Liu, Dehao Zhang, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu


Method Towards CVPR 2021 SimLocMatch Challenge

Aug 11, 2021
Xiaopeng Bi, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu


Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM framework

May 20, 2021
Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao


FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data

Oct 12, 2020
Zheng Chai, Yujing Chen, Liang Zhao, Yue Cheng, Huzefa Rangwala


Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

Sep 16, 2020
Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
