Chunxiao Jiang

Integrated Sensing-Communication-Computation for Edge Artificial Intelligence

Jun 01, 2023
Dingzhu Wen, Xiaoyang Li, Yong Zhou, Yuanming Shi, Sheng Wu, Chunxiao Jiang

Edge artificial intelligence (AI) is a promising solution toward 6G, empowering advanced techniques such as digital twins, holographic projection, semantic communications, and autonomous driving to achieve the intelligence of everything. The performance of edge AI tasks, including edge learning and edge AI inference, depends on the quality of three highly coupled processes: sensing for data acquisition, computation for information extraction, and communication for information transmission. However, these three modules compete for network resources to enhance their own quality of service. Integrated sensing-communication-computation (ISCC) is therefore of paramount significance for improving resource utilization and achieving the customized goals of edge AI tasks. By investigating the interplay among the three modules, this article presents various ISCC schemes for federated edge learning and edge AI inference tasks at both the application and physical layers.

Hybrid Driven Learning for Channel Estimation in Intelligent Reflecting Surface Aided Millimeter Wave Communications

May 30, 2023
Shuntian Zheng, Sheng Wu, Chunxiao Jiang, Wei Zhang, Xiaojun Jing

Intelligent reflecting surfaces (IRS) have been proposed in millimeter wave (mmWave) and terahertz (THz) systems to achieve both coverage and capacity enhancement, where the design of hybrid precoders, combiners, and the IRS typically relies on channel state information. In this paper, we address the problem of uplink wideband channel estimation for IRS-aided multiuser multiple-input single-output (MISO) systems with hybrid architectures. Combining model-driven and data-driven deep learning approaches, a hybrid-driven learning architecture is devised to jointly estimate and learn the properties of the channels. For a passive IRS-aided system, we propose residual learned approximate message passing as the model-driven network. A denoising and attention network in the data-driven network is used to jointly learn spatial and frequency features. Furthermore, we design a flexible hybrid-driven network for a hybrid passive and active IRS-aided system. Specifically, depthwise separable convolution is applied to the data-driven network, leading to lower network complexity and fewer parameters at the IRS side. Numerical results indicate that in both systems, the proposed hybrid-driven channel estimation methods significantly outperform existing deep learning-based schemes and effectively reduce the pilot overhead by about 60% in IRS-aided systems.

* 30 pages, 8 figures, submitted to IEEE Transactions on Wireless Communications on December 13, 2022 
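
The parameter savings that motivate the depthwise separable convolution at the IRS side can be seen in a minimal PyTorch sketch. The layer sizes below are illustrative assumptions, not the configuration of the paper's data-driven network.

```python
# Sketch of a depthwise separable convolution block of the kind applied in the
# data-driven network; channel counts and kernel size are illustrative only.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (per-channel spatial filtering) followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Compare parameter counts against a standard convolution of the same shape.
std = nn.Conv2d(64, 64, 3, padding=1)
sep = DepthwiseSeparableConv(64, 64, 3)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # the separable block needs far fewer parameters
```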

Age of Incorrect Information in Semantic Communications for NOMA Aided XR Applications

May 16, 2023
Jianrui Chen, Jingjing Wang, Chunxiao Jiang, Jiaxing Wang

As an evolving successor to the mobile Internet, extended reality (XR) devices can generate a fully digital immersive environment similar to the real world, integrating virtual and real-world elements. However, in addition to the difficulties encountered in traditional communications, a range of new challenges emerges, such as ultra-massive access, real-time synchronization, and an unprecedented amount of multi-modal data to transmit and process. To address these challenges, semantic communications might be harnessed in support of XR applications, yet they still lack a practical and effective performance metric. To open a new path for evaluating semantic communications, in this paper we construct a multi-user uplink non-orthogonal multiple access (NOMA) system and analyze its transmission performance by harnessing a novel metric called age of incorrect information (AoII). First, we derive the average semantic similarity of all users based on DeepSC and obtain closed-form expressions for the packets' age of information (AoI) relying on queueing theory. Besides, we formulate a non-convex optimization problem for the proposed AoII, which combines both error- and AoI-based performance, under constraints on semantic rate, transmit power and status update rate. Finally, to solve the problem, we apply an exact linear search based algorithm to find the optimal policy. Simulation results show that the AoII metric can beneficially evaluate both error- and AoI-based transmission performance simultaneously.
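
A minimal sketch of how an AoII-style metric behaves, assuming a simple threshold on semantic similarity and a placeholder objective for the linear search; the paper's closed-form expressions and constraints are not reproduced here.

```python
# The age penalty only accrues while the delivered information is "incorrect";
# the similarity threshold, time grid, and search objective are illustrative.
import numpy as np

def average_aoii(similarity: np.ndarray, threshold: float, dt: float = 1.0) -> float:
    """similarity[t]: semantic similarity of the latest delivered update at slot t.
    An update is 'incorrect' when similarity falls below the threshold; the age of
    incorrect information grows in those slots and resets to zero otherwise."""
    aoii, total = 0.0, 0.0
    for s in similarity:
        aoii = aoii + dt if s < threshold else 0.0
        total += aoii
    return total / (len(similarity) * dt)

# Exact linear search over a discretized status update rate, mirroring the
# search-based policy optimization (the objective here is only a stand-in).
rates = np.linspace(0.1, 1.0, 91)
objective = lambda r: 1.0 / r + 5.0 * r          # placeholder AoII-vs-power trade-off
best_rate = min(rates, key=objective)
print(best_rate)
```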

Trust-Worthy Semantic Communications for the Metaverse Relying on Federated Learning

May 16, 2023
Jianrui Chen, Jingjing Wang, Chunxiao Jiang, Yong Ren, Lajos Hanzo

As an evolving successor to the mobile Internet, the Metaverse creates the impression of an immersive environment, integrating the virtual as well as the real world. In contrast to the traditional mobile Internet based on servers, the Metaverse is constructed by billions of cooperating users who harness their smart edge devices, which have limited communication and computation resources. In this immersive environment, an unprecedented amount of multi-modal data has to be processed. To circumvent this impending bottleneck, low-rate semantic communication might be harnessed in support of the Metaverse. But given that private multi-modal data is exchanged in the Metaverse, we have to guard against security breaches and privacy invasions. Hence we conceive a trustworthy semantic communication system for the Metaverse based on a federated learning architecture, exploiting its distributed decision-making and privacy-preserving capability. We conclude by identifying a suite of promising research directions and open issues.

Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization

May 04, 2023
Yuanming Shi, Shuhao Xia, Yong Zhou, Yijie Mao, Chunxiao Jiang, Meixia Tao

Vertical federated learning (FL) is a collaborative machine learning framework that enables devices to learn a global model from feature-partitioned datasets without sharing local raw data. However, as the number of local intermediate outputs is proportional to the number of training samples, it is critical to develop communication-efficient techniques for wireless vertical FL to support high-dimensional model aggregation with full device participation. In this paper, we propose a novel cloud radio access network (Cloud-RAN) based vertical FL system that enables fast and accurate model aggregation by leveraging over-the-air computation (AirComp) and alleviating the communication straggler issue through cooperative model aggregation among geographically distributed edge servers. However, the model aggregation error caused by AirComp and the quantization errors caused by the limited fronthaul capacity degrade the learning performance of vertical FL. To address these issues, we characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions. To improve the learning performance, we establish a system optimization framework based on joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based optimization algorithms are developed. We conduct extensive simulations to demonstrate the effectiveness of the proposed system architecture and optimization framework for vertical FL.

* 32 pages, 7 figures 
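
The two error sources accounted for in the convergence analysis, AirComp aggregation noise and fronthaul quantization, can be mimicked in a toy NumPy simulation. The dimensions, noise level, and quantization step below are illustrative assumptions, not the paper's settings.

```python
# Toy model of aggregating per-device intermediate outputs over the air and
# forwarding the result over a capacity-limited fronthaul; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 32                                   # devices, intermediate-output dimension
local_outputs = rng.normal(size=(K, d))        # per-device intermediate outputs

# AirComp: simultaneous transmission over a multiple-access channel yields the
# sum of the (channel-inverted) signals plus receiver noise.
noise = rng.normal(scale=0.05, size=d)
air_sum = local_outputs.sum(axis=0) + noise

# Limited-capacity fronthaul: uniform quantization before forwarding to the CP.
step = 0.1
quantized = np.round(air_sum / step) * step

ideal = local_outputs.sum(axis=0)
print("aggregation MSE:", np.mean((quantized - ideal) ** 2))
```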

Over-the-Air Computation: Foundations, Technologies, and Applications

Oct 19, 2022
Zhibin Wang, Yapeng Zhao, Yong Zhou, Yuanming Shi, Chunxiao Jiang, Khaled B. Letaief

The rapid advancement of artificial intelligence technologies has given rise to diversified intelligent services, which place unprecedented demands on massive connectivity and gigantic data aggregation. However, scarce radio resources and stringent latency requirements make it challenging to meet these demands. To tackle these challenges, over-the-air computation (AirComp) emerges as a potential technology. Specifically, AirComp seamlessly integrates the communication and computation procedures through the superposition property of multiple-access channels, yielding a revolutionary multiple-access paradigm shift from "compute-after-communicate" to "compute-when-communicate". Meanwhile, low-latency and spectral-efficient wireless data aggregation can be achieved via AirComp by allowing multiple devices to access the wireless channels non-orthogonally. In this paper, we present recent advances in AirComp in terms of foundations, technologies, and applications. The mathematical form and communication design are introduced as the foundations of AirComp, and the critical issues of AirComp over different network architectures are then discussed along with a review of the existing literature. The technologies employed for the analysis and optimization of AirComp are reviewed from the information theory and signal processing perspectives. Moreover, we present the existing studies that tackle the practical implementation issues in AirComp systems, and elaborate on the applications of AirComp in the Internet of Things and edge intelligent networks. Finally, potential research directions are highlighted to motivate the future development of AirComp.
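
The "compute-when-communicate" principle can be illustrated with a short sketch, assuming a flat-fading channel and channel-inversion power control: the receiver recovers the average of the devices' data directly from the superposed signal. The channel model, scaling rule, and noise level are illustrative assumptions.

```python
# Minimal AirComp sketch: each device pre-scales its symbol to invert its own
# channel, the simultaneous transmissions superpose in the air, and the receiver
# de-scales the superposed signal to recover the target average.
import numpy as np

rng = np.random.default_rng(1)
K = 10
data = rng.uniform(0, 1, size=K)               # one measurement per device
h = rng.rayleigh(scale=1.0, size=K)            # flat-fading channel gains
eta = h.min() ** 2                             # common scaling set by the weakest link

tx = (np.sqrt(eta) / h) * data                 # per-device channel-inversion precoding
rx = np.sum(h * tx) + rng.normal(scale=0.01)   # superposition over the MAC plus noise
estimate = rx / (K * np.sqrt(eta))             # de-scale to recover the average

print(estimate, data.mean())                   # estimate is close to the true average
```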

A Multi-Domain VNE Algorithm based on Load Balancing in the IoT networks

Feb 07, 2022
Peiying Zhang, Fanglin Liu, Chunxiao Jiang, Abderrahim Benslimane, Juan-Luis Gorricho, Joan Serrat-Fernández

Virtual network embedding is one of the key problems of network virtualization. Since virtual network mapping is an NP-hard problem, much research has focused on the genetic algorithm, a representative evolutionary algorithm. However, the parameter settings in traditional methods depend too heavily on experience, and their low flexibility makes them unable to adapt to increasingly complex network environments. In addition, link-mapping strategies that do not consider load balancing can easily cause link blocking in high-traffic environments. In IoT environments involving medical, disaster relief, life support and other equipment, network performance and stability are particularly important. Therefore, how to provide a more flexible virtual network mapping service in a heterogeneous network environment with heavy traffic is an urgent problem. To address it, a virtual network mapping strategy based on a hybrid genetic algorithm is proposed. This strategy uses a dynamically calculated crossover probability and a pheromone-based mutation gene selection strategy to improve the flexibility of the algorithm. In addition, a weight update mechanism based on load balancing is introduced to reduce the probability of mapping failure while balancing the load. Simulation results show that the proposed method performs well on a number of performance metrics, including average mapping quotation, link load balancing, mapping cost-benefit ratio, acceptance rate and running time.
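
As a rough sketch of two of the ingredients described above, the snippet below shows a dynamically calculated crossover probability and a load-balancing link weight. Both formulas are illustrative assumptions rather than the paper's exact definitions.

```python
# Dynamic crossover probability from the population's fitness spread, and a
# link weight that penalizes heavily loaded links so link mapping balances load.
import numpy as np

def dynamic_crossover_prob(fitness: np.ndarray, p_min=0.4, p_max=0.9) -> float:
    """Use more crossover while the population is diverse, less as it converges."""
    spread = (fitness.max() - fitness.min()) / (abs(fitness.mean()) + 1e-9)
    return float(np.clip(p_min + spread, p_min, p_max))

def link_weight(bandwidth_free: float, bandwidth_total: float) -> float:
    """Smaller weight (more attractive) for lightly loaded links."""
    load = 1.0 - bandwidth_free / bandwidth_total
    return 1.0 + load ** 2

print(dynamic_crossover_prob(np.array([3.0, 5.0, 9.0])))
print(link_weight(bandwidth_free=20.0, bandwidth_total=100.0))
```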

Semantic Similarity Computing Model Based on Multi Model Fine-Grained Nonlinear Fusion

Feb 05, 2022
Peiying Zhang, Xingzhe Huang, Yaqi Wang, Chunxiao Jiang, Shuqing He, Haifeng Wang

Natural language processing (NLP) tasks have achieved excellent performance in many fields, including semantic understanding, automatic summarization, and image recognition. However, most neural network models for NLP extract text in a fine-grained way, which is not conducive to grasping the meaning of the text from a global perspective. To alleviate this problem, this paper proposes a novel model based on multi-model nonlinear fusion, combining traditional statistical methods with deep learning models. The model uses the Jaccard coefficient based on part of speech, Term Frequency-Inverse Document Frequency (TF-IDF), and a word2vec-CNN algorithm to measure the similarity of sentences, respectively. According to the calculation accuracy of each model, normalized weight coefficients are obtained and the calculation results are compared. The weighted vector is input into a fully connected neural network to give the final classification result. As a result, the statistical sentence similarity evaluation algorithm reduces the granularity of feature extraction, so it can grasp sentence features globally. Experimental results show that the matching accuracy of the sentence similarity calculation method based on multi-model nonlinear fusion is 84%, and the F1 value of the model is 75%.
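
A minimal sketch of the fusion idea, scoring a sentence pair with Jaccard and TF-IDF similarities and combining them with accuracy-normalized weights. The word2vec-CNN branch is omitted, and the accuracy values used as weights are placeholders rather than the paper's measured accuracies.

```python
# Combine several sentence-similarity measures with normalized weight coefficients.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def tfidf_cosine(a: str, b: str) -> float:
    vecs = TfidfVectorizer().fit_transform([a, b])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

def fused_similarity(a: str, b: str, accuracies=(0.70, 0.78)) -> float:
    weights = np.array(accuracies) / np.sum(accuracies)   # normalized weight coefficients
    scores = np.array([jaccard(a, b), tfidf_cosine(a, b)])
    return float(weights @ scores)

print(fused_similarity("the cat sat on the mat", "a cat is sitting on the mat"))
```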

Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT

Feb 03, 2022
Peiying Zhang, Chao Wang, Chunxiao Jiang, Zhu Han

The continuously expanding scale of the industrial Internet of Things (IIoT) leads to IIoT equipment generating massive amounts of user data at every moment. Depending on the requirements of end users, these data are usually highly heterogeneous and private, and most users are reluctant to expose them to public view. How to manage such time series data in an efficient and safe way in the field of IIoT remains an open issue that has attracted extensive attention from academia and industry. As a new machine learning (ML) paradigm, federated learning (FL) has great advantages in training on heterogeneous and private data. This paper studies the application of FL to managing IIoT equipment data in wireless network environments. To increase the model aggregation rate and reduce communication costs, we apply deep reinforcement learning (DRL) to the IIoT equipment selection process, specifically to select those IIoT equipment nodes with accurate models. We therefore propose a DRL-assisted FL algorithm that accounts for both the privacy and the efficiency of training on IIoT equipment data. By analyzing the data characteristics of IIoT equipment, we use the MNIST, Fashion-MNIST and CIFAR-10 datasets to represent the data generated by IIoT. During the experiments, we employ a deep neural network (DNN) model to train on the data, and experimental results show that the accuracy can reach more than 97%, which corroborates the effectiveness of the proposed algorithm.
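
A compact sketch of the select-then-aggregate loop, where a simple accuracy-ranking rule stands in for the DRL agent the paper trains; model shapes, sample counts, and accuracy values are illustrative assumptions.

```python
# Only the devices chosen by the selection policy contribute to the FedAvg update.
import numpy as np

def select_devices(local_accuracies, k=3):
    """Placeholder policy: keep the k devices with the most accurate local models
    (the DRL agent would learn this selection from state/reward feedback)."""
    return np.argsort(local_accuracies)[-k:]

def fedavg(local_weights, sample_counts, chosen):
    """Weighted average of the chosen devices' models, weighted by sample count."""
    counts = np.array([sample_counts[i] for i in chosen], dtype=float)
    stacked = np.stack([local_weights[i] for i in chosen])
    return (counts[:, None] * stacked).sum(axis=0) / counts.sum()

rng = np.random.default_rng(2)
local_weights = [rng.normal(size=10) for _ in range(6)]     # per-device model vectors
sample_counts = [120, 80, 200, 150, 60, 90]
local_acc = [0.91, 0.74, 0.96, 0.88, 0.70, 0.93]

chosen = select_devices(local_acc, k=3)
global_model = fedavg(local_weights, sample_counts, chosen)
print(chosen, global_model[:3])
```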

Security-Aware Virtual Network Embedding Algorithm based on Reinforcement Learning

Feb 03, 2022
Peiying Zhang, Chao Wang, Chunxiao Jiang, Abderrahim Benslimane

The virtual network embedding (VNE) algorithm has always been a key problem in network virtualization (NV) technology. Research in this field still faces the following problems. The traditional way to solve the VNE problem is to use heuristic algorithms; however, this approach relies on manually designed embedding rules, which do not reflect the actual situation of VNE. In addition, as intelligent learning algorithms become the prevailing approach to VNE, heuristic methods are gradually becoming outdated. At the same time, VNE raises security problems, yet no intelligent algorithm has addressed the security of VNE. For this reason, this paper proposes a security-aware VNE algorithm based on reinforcement learning (RL). In the training phase, we use a policy network as the learning agent and take the extracted attributes of the substrate nodes, arranged into a feature matrix, as input. The learning agent is trained in this environment to obtain the mapping probability of each substrate node. In the test phase, we map nodes according to these mapping probabilities and use a breadth-first search (BFS) strategy to map links. For the security problem, we add a security requirement level constraint for each virtual node and a security level constraint for each substrate node: virtual nodes can only be embedded on substrate nodes whose security level is not lower than their security requirement. Experimental results show that the proposed algorithm is superior to other typical algorithms in terms of long-term average return, long-term revenue-consumption ratio and virtual network request (VNR) acceptance rate.
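
A minimal sketch of the test-phase node-mapping rule with the security constraint, assuming the policy network's output probabilities are already given; the node attributes and probabilities below are illustrative placeholders.

```python
# A virtual node may only be placed on substrate nodes whose security level meets
# its requirement; among feasible nodes, pick the highest policy probability.
import numpy as np

def map_nodes(probs, sub_security, vn_requirements, sub_cpu, vn_cpu):
    """probs[v, s]: policy-network probability of placing virtual node v on substrate node s."""
    mapping, cpu = {}, sub_cpu.copy()
    for v, req in enumerate(vn_requirements):
        feasible = [s for s in range(len(sub_security))
                    if sub_security[s] >= req and cpu[s] >= vn_cpu[v]
                    and s not in mapping.values()]
        if not feasible:
            return None                        # virtual network request rejected
        best = max(feasible, key=lambda s: probs[v, s])
        mapping[v] = best
        cpu[best] -= vn_cpu[v]
    return mapping

rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(5), size=3)      # 3 virtual nodes, 5 substrate nodes
print(map_nodes(probs, sub_security=[2, 3, 1, 3, 2], vn_requirements=[2, 3, 1],
                sub_cpu=[50, 60, 40, 70, 30], vn_cpu=[20, 30, 10]))
```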
