Chan

On the Opportunities of Green Computing: A Survey

Nov 09, 2023
You Zhou, Xiujing Lin, Xiang Zhang, Maolin Wang, Gangwei Jiang, Huakang Lu, Yupeng Wu, Kai Zhang, Zhe Yang, Kehang Wang, Yongduo Sui, Fengwei Jia, Zuoli Tang, Yao Zhao, Hongxuan Zhang, Tiannuo Yang, Weibo Chen, Yunong Mao, Yi Li, De Bao, Yu Li, Hongrui Liao, Ting Liu, Jingwen Liu, Jinchi Guo, Xiangyu Zhao, Ying Wei, Hong Qian, Qi Liu, Xiang Wang, Wai Kin Chan, Chenliang Li, Yusen Li, Shiyu Yang, Jining Yan, Chao Mou, Shuai Han, Wuxia Jin, Guannan Zhang, Xiaodong Zeng


Artificial Intelligence (AI) has achieved significant advances in technology and research over several decades of development, and is now widely used in many areas, including computer vision, natural language processing, time-series analysis, speech synthesis, etc. In the age of deep learning, and especially with the rise of Large Language Models, most researchers' attention is focused on pursuing new state-of-the-art (SOTA) results, leading to ever-increasing model sizes and computational complexity. The need for high computing power brings higher carbon emissions and undermines research fairness by preventing small and medium-sized research institutions and companies with limited funding from participating in research. To tackle the challenges of computing resources and the environmental impact of AI, Green Computing has become a hot research topic. In this survey, we give a systematic overview of the technologies used in Green Computing. We propose a framework of Green Computing and divide it into four key components: (1) Measures of Greenness, (2) Energy-Efficient AI, (3) Energy-Efficient Computing Systems, and (4) AI Use Cases for Sustainability. For each component, we discuss the research progress made and the commonly used techniques for optimizing AI efficiency. We conclude that this new research direction has the potential to resolve the conflict between resource constraints and AI development. We encourage more researchers to pay attention to this direction and make AI more environmentally friendly.
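As a concrete illustration of the "Measures of Greenness" component, the sketch below estimates the carbon footprint of a training run using the common energy × PUE × grid-carbon-intensity formula from the greenness-measurement literature. The function name and all numeric values (power draw, PUE, grid mix) are illustrative assumptions, not figures from the survey.

```python
# Illustrative "Measures of Greenness" style estimate. The formula
# (energy * PUE * grid carbon intensity) follows common practice in the
# literature; every numeric value below is an assumption for illustration.

def training_co2_kg(gpu_count: int,
                    gpu_power_watts: float,
                    hours: float,
                    pue: float = 1.5,                    # assumed datacenter overhead
                    grid_kg_co2_per_kwh: float = 0.4):   # assumed grid carbon mix
    """Estimate the CO2 (kg) emitted by a training run."""
    energy_kwh = gpu_count * gpu_power_watts * hours / 1000.0
    return energy_kwh * pue * grid_kg_co2_per_kwh

# Example: 8 GPUs drawing 300 W each for 72 hours.
print(f"{training_co2_kg(8, 300.0, 72.0):.1f} kg CO2")
```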

* 113 pages, 18 figures 

A Deep Learning Framework for Traffic Data Imputation Considering Spatiotemporal Dependencies

Apr 18, 2023
Li Jiang, Ting Zhang, Qiruyi Zuo, Chenyu Tian, George P. Chan, Wai Kin Chan


Spatiotemporal (ST) data collected by sensors can be represented as multivariate time series, i.e., sequences of data points listed in time order. Despite the vast amount of useful information it contains, ST data usually suffers from missing or incomplete values, which limits its applications. Imputation is one viable solution and is often used to preprocess the data for further applications. In practice, however, spatiotemporal data imputation is quite difficult due to the complexity of spatiotemporal dependencies, which change dynamically in the traffic network, and it is a crucial preprocessing task for downstream applications. Existing approaches mostly capture only the temporal dependencies in time series or static spatial dependencies; they fail to model spatiotemporal dependencies directly, and their representation ability is relatively limited.
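To make the imputation setting concrete, here is a minimal baseline sketch (not the framework proposed in the paper): it drops entries of a synthetic multivariate series at random and fills them by linear interpolation along the time axis. This is exactly the kind of purely temporal approach the abstract argues is insufficient, since it ignores spatial dependencies between sensors.

```python
import numpy as np
import pandas as pd

# Minimal imputation baseline for sensor data shaped (time, sensors).
# Purely temporal interpolation like this ignores the spatial
# dependencies that the paper's framework is designed to capture.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))        # synthetic sensor readings (assumed)
mask = rng.random(data.shape) < 0.2     # 20% missing at random (assumed)
data[mask] = np.nan

df = pd.DataFrame(data)
imputed = df.interpolate(method="linear", axis=0, limit_direction="both")
print(f"remaining NaNs: {imputed.isna().sum().sum()}")
```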

* accepted at ICITE 2022 

Compacting, Picking and Growing for Unforgetting Continual Learning

Oct 15, 2019
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, Chu-Song Chen


Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression with weight pruning, critical weight selection, and progressive network expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and has several favorable characteristics. First, it avoids forgetting (i.e., it learns new tasks while remembering all previous tasks). Second, it allows model expansion but can maintain model compactness when handling sequential tasks. Besides, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated through learning previous tasks helps build a better model for new tasks compared to training the models independently on individual tasks. Experimental results show that our approach can incrementally learn a deep model to tackle multiple tasks without forgetting, while maintaining model compactness with more satisfactory performance than individual task training.
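As a rough, simplified sketch of the compacting and picking steps (not the authors' implementation), the snippet below prunes a weight matrix by magnitude to free capacity for future tasks, freezes the surviving weights, and represents task-specific reuse as a binary picking mask. The helper names and the keep ratio are assumptions for illustration.

```python
import numpy as np

def compact(weights: np.ndarray, keep_ratio: float = 0.5):
    """Magnitude pruning: keep the largest |w|, zero the rest.
    The zeroed slots become free capacity for later tasks."""
    k = int(weights.size * keep_ratio)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    keep_mask = np.abs(weights) >= threshold
    return weights * keep_mask, keep_mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))   # toy weight matrix (assumed)

# Compact: prune task-1 weights; the survivors are then frozen.
w_task1, frozen = compact(w, keep_ratio=0.5)

# Pick: a new task would learn a binary mask selecting which frozen
# task-1 weights to reuse (initialized here to reuse all of them).
pick_mask = frozen.copy()

# Grow: the pruned (free) slots are retrained for the new task, and the
# model expands only if this free capacity proves insufficient.
free_slots = ~frozen
print(f"frozen: {frozen.sum()}, free for new task: {free_slots.sum()}")
```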

* To appear in the thirty-third Conference on Neural Information Processing Systems (NeurIPS) 2019 