This work proposes a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in an on-device distributed federated learning (FL) system. Each mobile device in the system engages in the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively, so as to minimize the system objective subject to the computation/communication budget and a target latency requirement. In particular, the mobile devices are connected via wireless TCP/IP architectures. Exploiting the structure of the optimization problem, we decompose it into two convex sub-problems. Drawing on Lagrangian duality and harmony search techniques, we characterize the globally optimal solution by closed-form solutions to all sub-problems, which yield qualitative insights into the multi-resource tradeoff. Numerical simulations validate the analysis and assess the performance of the proposed algorithm.
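The abstract does not spell out the closed forms; as a minimal sketch of the kind of per-device closed-form solution such a convex decomposition yields, consider choosing a CPU frequency to minimize a weighted latency-energy objective under an assumed cubic energy model (all constants and workloads below are hypothetical, not the paper's actual parameters):

```python
def optimal_frequency(C, w_t, w_e, kappa, T_max, f_max):
    """Minimize w_t*C/f + w_e*kappa*C*f^2 over f in (0, f_max] with C/f <= T_max.

    Setting the derivative to zero gives the stationary point
    f* = (w_t / (2*w_e*kappa))^(1/3), then we clip to the feasible interval.
    """
    f_star = (w_t / (2.0 * w_e * kappa)) ** (1.0 / 3.0)  # unconstrained optimum
    f_min = C / T_max                                    # latency feasibility
    return min(max(f_star, f_min), f_max)

workloads = [1e9, 2e9, 5e8]          # CPU cycles per device (hypothetical)
alloc = [optimal_frequency(C, w_t=1.0, w_e=1.0, kappa=1e-27,
                           T_max=2.0, f_max=2e9) for C in workloads]
for C, f in zip(workloads, alloc):
    print(f"f = {f / 1e9:.3f} GHz, latency = {C / f:.3f} s")
```

Note that the larger workload is pushed to the frequency where the latency constraint binds, while the lighter ones sit at the interior stationary point, illustrating the latency-energy tradeoff.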
Semantic communication has recently attracted significant interest from both industry and academia due to its potential to transform the existing data-focused communication architecture into a more intelligent, goal-oriented, semantic-aware networking system. Despite this promise, semantic communications and semantic-aware networking are still in their infancy. Most existing works focus on transporting and delivering explicit semantic information, e.g., labels or features of objects, that can be directly identified from the source signal. The original definition of semantics, as well as recent results in cognitive neuroscience, suggests that it is the implicit semantic information, in particular the hidden relations connecting different concepts and feature items, that plays the fundamental role in recognizing, communicating, and delivering the real semantic meaning of messages. Motivated by this observation, we propose a novel reasoning-based implicit semantic-aware communication network architecture that allows multiple tiers of CDC and edge servers to collaborate in supporting efficient semantic encoding, decoding, and interpretation for end users. We introduce a new multi-layer representation of semantic information that takes into consideration both the hierarchical structure of implicit semantics and the personalized inference preferences of individual users. We model the semantic reasoning process as a reinforcement learning process and propose an imitation-based semantic reasoning mechanism learning (iRML) solution that enables the edge servers to learn a reasoning policy imitating the inference behavior of the source user. Finally, a federated GCN-based collaborative reasoning solution is proposed to allow multiple edge servers to jointly construct a shared semantic interpretation model from decentralized knowledge datasets.
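iRML's internals are beyond the abstract; the following toy behavior-cloning sketch only illustrates the core idea of imitation-based policy learning, fitting a softmax policy to an assumed expert's (state, action) pairs by cross-entropy gradient descent (states, actions, and samples are illustrative):

```python
import math

# Toy imitation learning: fit a softmax policy to an expert's (state, action)
# pairs, mimicking the idea of learning a reasoning policy that imitates the
# source user's observed inference behavior. All values are illustrative.
expert = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]  # state -> action
W = [[0.0, 0.0], [0.0, 0.0]]          # W[state][action] policy logits
lr = 0.5
for _ in range(200):
    for s, a in expert:
        z0, z1 = math.exp(W[s][0]), math.exp(W[s][1])
        probs = [z0 / (z0 + z1), z1 / (z0 + z1)]
        for act in (0, 1):            # softmax cross-entropy gradient step
            grad = probs[act] - (1.0 if act == a else 0.0)
            W[s][act] -= lr * grad
policy = [max((0, 1), key=lambda a: W[s][a]) for s in (0, 1)]
print(policy)                         # greedy policy, one action per state
```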
Federated edge learning (FEL) is a promising paradigm of distributed machine learning that can preserve data privacy while training the global model collaboratively. However, FEL still faces model confidentiality issues due to eavesdropping risks when cryptographic keys are exchanged through traditional encryption schemes. Therefore, in this paper, we propose a hierarchical architecture for quantum-secured FEL systems with ideal security based on quantum key distribution (QKD) to facilitate public key and model encryption against eavesdropping attacks. Specifically, we propose a stochastic resource allocation model for efficient QKD to encrypt FEL keys and models. In the considered systems, remote FEL workers are connected to cluster heads via quantum-secured channels to train an aggregated global model collaboratively. However, because the number of workers at each location is unpredictable, so is the demand for secret-key rates to support secure model transmission to the server. To minimize the total cost under this stochastic demand, we formulate the allocation of the limited QKD resources (i.e., wavelengths) in the proposed architecture as a stochastic programming model. To this end, we propose a federated reinforcement learning-based resource allocation scheme that solves the proposed model without complete state information. The proposed scheme enables QKD managers and controllers to train a global QKD resource allocation policy while keeping their private experiences local. Numerical results demonstrate that the proposed schemes successfully achieve the cost-minimizing objective under uncertain demand while improving training efficiency by about 50% compared to state-of-the-art schemes.
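As a minimal illustration of the stochastic-programming viewpoint (a toy model, not the paper's actual formulation), a two-stage problem reserves wavelengths up front at a low unit cost and pays a higher on-demand price for any shortfall under each demand scenario; the small first-stage decision can then be solved by direct search:

```python
def expected_cost(x, scenarios, c_reserve, c_ondemand):
    """First-stage reservation cost plus expected second-stage recourse cost."""
    recourse = sum(p * c_ondemand * max(d - x, 0) for d, p in scenarios)
    return c_reserve * x + recourse

# Hypothetical demand scenarios: (wavelengths needed, probability).
scenarios = [(4, 0.3), (8, 0.5), (12, 0.2)]
best_x = min(range(0, 13),
             key=lambda x: expected_cost(x, scenarios, 1.0, 3.0))
print(best_x, expected_cost(best_x, scenarios, 1.0, 3.0))
```

The solution follows newsvendor logic: reserve until the shortage probability drops below the ratio of reservation cost to on-demand cost.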
We exploit both covert communication and friendly jamming to propose a friendly-jamming-assisted covert communication scheme and use it to doubly secure a large-scale device-to-device (D2D) network against eavesdroppers (i.e., wardens). The D2D transmitters defend against the wardens by: 1) hiding their transmissions with enhanced covert communication, and 2) leveraging friendly jamming to ensure information secrecy even if the D2D transmissions are detected. We model the confrontation between the wardens and the D2D network (the transmitters and the friendly jammers) as a two-stage Stackelberg game. Therein, the wardens are the followers at the lower stage, aiming to minimize their detection errors, and the D2D network is the leader at the upper stage, aiming to maximize its utility (in terms of link reliability and communication security) subject to a constraint on communication covertness. We apply stochastic geometry to model the network spatial configuration so as to conduct a system-level study. We develop a bi-level optimization algorithm to search for the equilibrium of the proposed Stackelberg game based on the successive convex approximation (SCA) method and the Rosenbrock method. Numerical results reveal interesting insights. We observe that, without assistance from the jammers, it is difficult to achieve covert communication for D2D transmissions. Moreover, we illustrate the advantages of the proposed friendly-jamming-assisted covert communication by comparing it with the information-theoretic secrecy approach in terms of the secure communication probability and network utility.
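As a toy sketch of the backward-induction logic in such a Stackelberg game (the paper's stochastic-geometry model is far richer), assume a radiometer-style warden whose false-alarm and missed-detection probabilities follow simple exponential forms; the leader then picks the largest transmit power that keeps the warden's best-response error above the covertness margin:

```python
import math

def warden_error(tau, P, sigma2=1.0):
    """Toy detector model (assumed): false alarm + missed detection at tau."""
    p_fa = math.exp(-tau / sigma2)
    p_md = 1.0 - math.exp(-tau / (sigma2 + P))
    return p_fa + p_md

def warden_best_response(P, taus):
    """Lower stage: the warden picks the threshold minimizing its total error."""
    return min(warden_error(t, P) for t in taus)

taus = [i * 0.01 for i in range(1, 2001)]            # candidate thresholds
eps = 0.1                                            # covertness margin (assumed)
best_P, best_U = 0.0, 0.0
for P in [i * 0.01 for i in range(1, 201)]:          # leader's power grid
    if warden_best_response(P, taus) >= 1.0 - eps:   # covertness constraint
        U = math.log(1.0 + P)                        # toy utility: link rate
        if U > best_U:
            best_P, best_U = P, U
print(best_P, best_U)
```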
Semantic communication has emerged as a promising technology for breaking through the Shannon limit and is envisioned as a key enabler and fundamental paradigm for future 6G networks and applications, e.g., smart healthcare. In this paper, we focus on UAV image-sensing-driven task-oriented semantic communication scenarios. The majority of existing work has focused on designing advanced algorithms for high-performance semantic communication. However, challenges such as the energy-hungry and efficiency-limited image retrieval process, and semantic encoding that ignores user personality, have not yet been explored; these challenges have hindered the widespread adoption of semantic communication. To address them, at the semantic level, we first design an energy-efficient task-oriented semantic communication framework with a triple-based scene graph for image information. We then design a new personalized semantic encoder based on user interests to meet the requirements of personalized saliency. Moreover, at the communication level, we mathematically study the effects of dynamic wireless fading channels on semantic transmission and accordingly design an optimal multi-user resource allocation scheme using game theory. Numerical results based on real-world datasets clearly indicate that the proposed framework and schemes significantly enhance the personalization and anti-interference performance of semantic communication and efficiently improve the communication quality of semantic communication services.
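The game-theoretic allocation over fading channels is not detailed in the abstract; a minimal best-response-dynamics sketch on a toy channel-selection congestion game (not the paper's actual game) shows how unilateral updates converge to a Nash equilibrium:

```python
def best_response_dynamics(users=6, channels=3):
    """Iterate unilateral best responses until no user wants to deviate."""
    choice = [0] * users                       # all users start on channel 0
    changed = True
    while changed:
        changed = False
        for u in range(users):
            loads = [sum(1 for c in choice if c == ch)
                     for ch in range(channels)]
            # Cost = congestion on the chosen channel, counting yourself:
            # staying costs loads[ch]; switching costs loads[ch] + 1.
            costs = [loads[ch] + (0 if ch == choice[u] else 1)
                     for ch in range(channels)]
            best = min(range(channels), key=lambda ch: costs[ch])
            if costs[best] < costs[choice[u]]:
                choice[u] = best
                changed = True
    return choice

eq = best_response_dynamics()
print(sorted(sum(1 for c in eq if c == ch) for ch in range(3)))
```

In this toy game the dynamics balance the load evenly, the hallmark of reaching a pure-strategy Nash equilibrium.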
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technology for empowering next-generation communications. However, XL-MIMO, which is still in an early stage of research, has been designed with a variety of hardware architectures and performance analysis schemes. To illustrate the differences and similarities among these schemes, we comprehensively review existing XL-MIMO hardware designs and characteristics in this article. We then thoroughly discuss the research status of XL-MIMO in terms of channel modeling, performance analysis, and signal processing. Several existing challenges are introduced and respective solutions are provided. We further present two case studies: hybrid propagation channel modeling and the computation of the effective degrees of freedom (EDoF) in practical scenarios. Using our proposed solutions, we present numerical results investigating the EDoF performance for scenarios with non-parallel XL-MIMO surfaces and with multiple user equipment, respectively. Finally, we discuss several future research directions.
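One common EDoF definition in the MIMO literature (assumed here; the article's exact computation may differ) is based on the eigenvalue spread of the spatial correlation matrix R = HH^H, namely EDoF = (tr R)^2 / tr(R^2), which equals 1 for a rank-one channel and the antenna count for an ideal isotropic one:

```python
import numpy as np

def edof(H):
    """Effective degrees of freedom via (tr R)^2 / tr(R^2) with R = H H^H."""
    R = H @ H.conj().T
    return (np.trace(R).real ** 2) / np.trace(R @ R).real

rng = np.random.default_rng(0)
# Rich i.i.d. channel vs. a rank-one (keyhole-like) channel, both 8x8.
H_iid = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
H_rank1 = np.outer(rng.standard_normal(8),
                   rng.standard_normal(8)).astype(complex)
print(edof(H_iid), edof(H_rank1))
```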
In this paper, a semantic communication framework is proposed for textual data transmission. In the studied model, a base station (BS) extracts the semantic information from textual data and transmits it to each user. The semantic information is modeled by a knowledge graph (KG) that consists of a set of semantic triples. After receiving the semantic information, each user recovers the original text using a graph-to-text generation model. To measure the performance of the considered semantic communication framework, a metric of semantic similarity (MSS), which jointly captures the semantic accuracy and completeness of the recovered text, is proposed. Due to wireless resource limitations, the BS may not be able to transmit the entire semantic information to each user while satisfying the transmission delay constraint. Hence, the BS must select an appropriate resource block for each user as well as determine the part of the semantic information to transmit to each user. As such, we formulate an optimization problem whose goal is to maximize the total MSS by jointly optimizing the resource allocation policy and determining the partial semantic information to be transmitted. To solve this problem, a proximal-policy-optimization-based reinforcement learning (RL) algorithm integrated with an attention network is proposed. The proposed algorithm can evaluate the importance of each triple in the semantic information using the attention network and then build a relationship between the importance distribution of the triples and the total MSS. Compared to traditional RL algorithms, the proposed algorithm can dynamically adjust its learning rate, thus ensuring convergence to a locally optimal solution.
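The attention network itself is not specified in the abstract; as a hypothetical sketch of the selection step, triple importance can be scored with a softmax and the highest-weight triples transmitted greedily under a resource-block budget (triples, scores, and costs below are invented for illustration):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical semantic triples with attention scores and transmission costs.
triples = [("Paris", "capital_of", "France"),
           ("Paris", "population", "2.1M"),
           ("France", "continent", "Europe")]
scores = [2.0, 0.5, 1.0]      # raw importance from an attention network
costs = [1, 1, 1]             # resource blocks needed per triple
budget = 2                    # blocks available for this user

weights = softmax(scores)
order = sorted(range(len(triples)), key=lambda i: weights[i], reverse=True)
selected, used = [], 0
for i in order:               # greedy selection under the budget
    if used + costs[i] <= budget:
        selected.append(triples[i])
        used += costs[i]
print(selected)
```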
As a virtual world interacting with the real world, the Metaverse encapsulates our expectations of the next-generation Internet and brings new key performance indicators (KPIs). In particular, Metaverse services based on graphical technologies, e.g., virtual traveling, require low latency for virtual-object data transmission and high reliability for user-instruction uploading. Although conventional ultra-reliable and low-latency communications (URLLC) can satisfy the vast majority of objective service KPIs, it is difficult to offer users the personalized immersive experience that is a distinctive feature of next-generation Internet services. Since the quality of experience (QoE) can be regarded as a comprehensive KPI, URLLC is evolving toward next-generation URLLC (xURLLC), which achieves higher QoE for Metaverse services by allocating more resources to the virtual objects in which users are more interested. In this paper, we study the interaction between the Metaverse service provider (MSP) and the network infrastructure provider (InP) in deploying Metaverse xURLLC services, and provide an optimal contract design framework. Specifically, the utility of the MSP, defined as a function of Metaverse users' QoE, is maximized while ensuring the incentives of the InP. To model the QoE of Metaverse xURLLC services, we propose a novel metric named Meta-Immersion that incorporates both objective network KPIs and the subjective feelings of Metaverse users. Using a user-object-attention level (UOAL) dataset, we develop and validate an attention-aware rendering capacity allocation scheme to improve QoE. It is shown that xURLLC achieves an average QoE improvement of 20.1% compared to conventional URLLC with a uniform allocation scheme; when total resources are limited, the improvement is even higher, e.g., 40%.
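The paper's contract is not given in the abstract; a textbook two-type screening sketch (with hypothetical InP costs and a log valuation for the MSP) conveys how such a menu is chosen, with the high-cost type's individual-rationality constraint and the low-cost type's incentive-compatibility constraint binding:

```python
import math

def V(q):
    """MSP's valuation of resource quantity q (concave, illustrative)."""
    return math.log(1.0 + q)

# Two hypothetical InP cost types: (marginal cost c, probability beta).
(c1, b1), (c2, b2) = (0.2, 0.6), (0.5, 0.4)

grid = [i * 0.1 for i in range(51)]
best, best_profit = None, float("-inf")
for q1 in grid:
    for q2 in grid:
        # Standard two-type screening: IR binds for the high-cost type,
        # IC binds for the low-cost type (which earns an information rent).
        p2 = c2 * q2
        p1 = c1 * q1 + (c2 - c1) * q2
        profit = b1 * (V(q1) - p1) + b2 * (V(q2) - p2)
        if profit > best_profit:
            best_profit, best = profit, (q1, p1, q2, p2)
print(best, best_profit)
```

As screening theory predicts, the low-cost type's quantity is efficient (marginal valuation equals its cost) while the high-cost type's quantity is distorted downward.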
Semantic communication technologies enable wireless edge devices to communicate effectively by transmitting the semantic meaning of data. Edge components, such as vehicles in next-generation intelligent transport systems, use well-trained semantic models to encode and decode semantic information extracted from raw and sensor data. However, limited computing resources make it difficult to support the training of accurate semantic models on edge devices. Edge devices can therefore buy pretrained semantic models from semantic model providers, which is called "semantic model trading". Upon collecting semantic information with these models, the edge devices can then sell the extracted semantic information, e.g., information about urban road conditions or traffic signs, to interested buyers for profit, which is called "semantic information trading". To facilitate both types of trade, effective incentive mechanisms should be designed. Thus, in this paper, we propose a hierarchical trading system that supports semantic model trading and semantic information trading jointly. The proposed incentive mechanism helps to maximize the revenue of semantic model providers in semantic model trading and effectively incentivizes them to participate in the development of semantic communication systems. For semantic information trading, our designed auction approach supports trading between multiple semantic information sellers and buyers while ensuring individual rationality, incentive compatibility, and budget balance; moreover, it allows them to achieve higher utilities than the baseline method.
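The designed auction is not specified in the abstract; McAfee's classic trade-reduction double auction is one mechanism that achieves individual rationality, dominant-strategy incentive compatibility, and weak budget balance for many-to-many trading, sketched here (bid values are illustrative; the paper's own design may differ):

```python
def mcafee_double_auction(bids, asks):
    """McAfee's trade-reduction double auction.

    Returns (number of trades, (buyer price, seller price)); the highest
    bidders trade with the lowest askers, possibly dropping the marginal pair.
    """
    B = sorted(bids, reverse=True)           # buyer bids, descending
    S = sorted(asks)                         # seller asks, ascending
    k = 0
    while k < min(len(B), len(S)) and B[k] >= S[k]:
        k += 1                               # k = number of efficient trades
    if k == 0:
        return 0, None
    if k < min(len(B), len(S)):
        p0 = (B[k] + S[k]) / 2.0             # candidate uniform price
        if S[k - 1] <= p0 <= B[k - 1]:
            return k, (p0, p0)               # all k pairs trade at p0
    # Otherwise drop the k-th pair: buyers pay B[k-1], sellers receive S[k-1].
    return k - 1, (B[k - 1], S[k - 1])

n, prices = mcafee_double_auction(bids=[9, 7, 5, 2], asks=[1, 3, 6, 8])
print(n, prices)
```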
Emerging with the support of computing and communication technologies, the Metaverse is expected to bring users unprecedented service experiences. However, the increase in the number of Metaverse users places a heavy demand on network resources, especially for Metaverse services that are based on graphical extended reality and require rendering a plethora of virtual objects. To make efficient use of network resources and improve the Quality-of-Experience (QoE), we design an attention-aware network resource allocation scheme to achieve customized Metaverse services. The aim is to allocate more network resources to the virtual objects in which users are more interested. We first discuss several key techniques related to Metaverse services, including QoE analysis, eye tracking, and remote rendering. We then review existing datasets and propose the user-object-attention level (UOAL) dataset, which contains the ground-truth attention of 30 users to 96 objects in 1,000 images; a tutorial on how to use UOAL is presented. With the help of UOAL, we propose an attention-aware network resource allocation algorithm with two steps, i.e., attention prediction and QoE maximization. Specifically, we provide an overview of the designs of two types of attention prediction methods, i.e., interest-aware and time-aware prediction. Using the predicted user-object-attention values, network resources such as the rendering capacity of edge devices can be allocated optimally to maximize the QoE. Finally, we propose promising research directions related to Metaverse services.
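As a hedged sketch of the QoE-maximization step (the article's QoE model is not given here), suppose rendering capacity c_i is allocated to maximize an attention-weighted sum of log utilities under a total-capacity budget; this admits a water-filling solution found by bisection on the KKT multiplier (attention values and the budget below are hypothetical):

```python
def allocate_capacity(attention, total):
    """Maximize sum_i a_i*log(1+c_i) s.t. sum_i c_i = total (water-filling)."""
    def used(nu):
        # Capacity consumed if the water level is nu: c_i = max(a_i/nu - 1, 0).
        return sum(max(a / nu - 1.0, 0.0) for a in attention)
    lo, hi = 1e-9, max(attention)      # bracket the water level nu
    for _ in range(100):               # bisection on the KKT multiplier
        mid = (lo + hi) / 2.0
        if used(mid) > total:
            lo = mid                   # nu too small: allocation overshoots
        else:
            hi = mid
    nu = (lo + hi) / 2.0
    return [max(a / nu - 1.0, 0.0) for a in attention]

# Hypothetical predicted user-object-attention values for three objects.
alloc = allocate_capacity([0.6, 0.3, 0.1], total=10.0)
print([round(c, 3) for c in alloc])
```

Objects with higher predicted attention receive more rendering capacity, exactly the attention-aware behavior the scheme targets.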