Intelligent task-oriented semantic communications (SemComs) have witnessed great progress with the development of deep learning (DL). In this paper, we propose a semantic-aware hybrid automatic repeat request (SemHARQ) framework for the robust and efficient transmission of semantic features. First, to improve the robustness and effectiveness of semantic coding, a multi-task semantic encoder is proposed. Meanwhile, a feature importance ranking (FIR) method is investigated to ensure the delivery of important features under limited channel resources. Then, to accurately detect possible transmission errors, a novel feature distortion evaluation (FDE) network is designed to identify the distortion level of each feature, based on which an efficient HARQ method is proposed. Specifically, the corrupted features are retransmitted, while the remaining channel resources are used for incremental transmissions. The system performance is evaluated under different channel conditions in multi-task Internet of Vehicles scenarios. Extensive experiments show that the proposed framework outperforms state-of-the-art works by more than 20% in rank-1 accuracy for vehicle re-identification, and by 10% in vehicle color classification accuracy in the low signal-to-noise ratio regime.
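The retransmission logic described above can be sketched as a simple scheduling rule; this is a minimal illustration, not the paper's exact design, and the threshold `tau` and the importance-ordered selection are assumptions.

```python
import numpy as np

def schedule_retransmission(distortion, importance, budget, tau=0.5):
    """Sketch of a SemHARQ-style retransmission round.

    distortion: per-feature distortion scores from an FDE-like network
    importance: per-feature importance scores (higher = more important)
    budget:     number of feature slots available in this round
    tau:        hypothetical distortion threshold marking a feature corrupted
    """
    corrupted = np.flatnonzero(distortion > tau)
    # Retransmit corrupted features first, most important ones first.
    order = corrupted[np.argsort(-importance[corrupted])]
    retransmit = order[:budget].tolist()
    # Spend any leftover slots on incremental (intact) features.
    leftover = max(budget - len(retransmit), 0)
    fresh = np.flatnonzero(distortion <= tau)
    incremental = fresh[np.argsort(-importance[fresh])][:leftover].tolist()
    return retransmit, incremental
```

With three features where features 0 and 2 are corrupted and a budget of three slots, the scheduler retransmits both corrupted features (the more important one first) and fills the remaining slot with an incremental feature.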
Semantic communications are expected to become a core new paradigm of sixth generation (6G) wireless networks. Most existing works implicitly utilize channel information for codec training, which leads to poor communication performance when the channel type or statistical characteristics change. To tackle this issue, a novel channel-transferable semantic communications (CT-SemCom) framework is proposed, which adapts codecs learned on one type of channel to other types of channels. Furthermore, integrating the proposed framework with orthogonal frequency division multiplexing systems employing non-orthogonal multiple access, i.e., OFDM-NOMA systems, a power allocation problem is formulated to realize the transfer from additive white Gaussian noise (AWGN) channels to multi-subcarrier Rayleigh fading channels. We then design a semantics-similar dual transformation (SSDT) algorithm to derive analytical solutions with low complexity. Simulation results show that the proposed CT-SemCom framework with the SSDT algorithm significantly outperforms the existing work w.r.t. channel transferability, e.g., the peak signal-to-noise ratio (PSNR) of image transmission improves by 4.2-7.3 dB under different variances of Rayleigh fading channels.
Semantic communications, aiming at ensuring the successful delivery of the meaning of information, are expected to be one of the potential techniques for next generation communications. However, the knowledge forming and synchronizing mechanism that enables semantic communication systems to extract and interpret the semantics of information according to the communication intents is still immature. In this paper, we propose a semantic image transmission framework with explicit semantic bases (Sebs), where Sebs are generated and employed as the knowledge shared between the transmitter and the receiver with flexible granularity. To represent images with Sebs, a novel Seb-based reference image generator is proposed to generate Sebs and then decompose the transmitted images. To further encode/decode the residual information for precise image reconstruction, a Seb-based image encoder/decoder is proposed. The key components of the proposed framework are optimized jointly by end-to-end (E2E) training, where the loss function is specifically designed to tackle the non-differentiable operation in the Seb-based reference image generator by introducing a gradient approximation mechanism. Extensive experiments show that the proposed framework outperforms state-of-the-art works by 0.5-1.5 dB in peak signal-to-noise ratio (PSNR) across different signal-to-noise ratios (SNRs).
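The paper's exact gradient approximation mechanism is not specified in the abstract; a common approach to training through a non-differentiable operation is the straight-through estimator, sketched below on a toy rounding operation purely for illustration.

```python
import numpy as np

def ste_step(w, target, lr=0.1):
    """One gradient step through a non-differentiable round() using the
    straight-through estimator: the forward pass rounds, while the
    backward pass treats round() as the identity (dy/dw ~ 1)."""
    y = np.round(w)                    # non-differentiable forward pass
    loss = 0.5 * (y - target) ** 2     # squared-error loss on the output
    grad_y = y - target                # dL/dy
    grad_w = grad_y                    # straight-through: pass gradient unchanged
    return w - lr * grad_w, loss
```

For example, one step from w = 0.3 toward target 2.0 yields y = 0, loss = 2.0, and moves w to 0.5, even though the true gradient of round() is zero almost everywhere.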
Semantic communications are expected to be an innovative solution to the emerging intelligent applications in the era of connected intelligence. In this paper, a novel scalable multi-task semantic communication system with feature importance ranking (SMSC-FIR) is explored. First, the multi-task correlations are investigated by a joint semantic encoder to extract relevant features. Then, a new scalable coding method is proposed based on feature importance ranking, which dynamically adjusts the coding rate and guarantees that features important to the semantic tasks are transmitted with higher priority. Simulation results show that SMSC-FIR achieves performance gains on individual intelligent tasks, especially in the low SNR regime.
Internet of Vehicles (IoV) is expected to become the central infrastructure to provide advanced services to connected vehicles and users for higher transportation efficiency and security. A variety of emerging applications/services bring explosively growing demands for mobile data traffic between connected vehicles and roadside units (RSUs), imposing the significant challenge of spectrum scarcity on IoV. In this paper, we propose a cooperative semantic-aware architecture to convey essential semantics from collaborating users to servers, thereby lowering the data traffic. In contrast to current solutions that are mainly based on piling up highly complex signal processing techniques and multiple access capabilities in terms of syntactic communications, this paper puts forth the idea of semantic-aware content delivery in IoV. Specifically, the successful transmission of the essential semantics of the source data is pursued, rather than the accurate reception of symbols regardless of their meaning, as in conventional syntactic communications. To assess the benefits of the proposed architecture, we provide a case study of the image retrieval task for vehicles in intelligent transportation systems. Simulation results demonstrate that the proposed architecture outperforms the existing solutions with fewer radio resources, especially in the low signal-to-noise ratio (SNR) regime, which sheds light on the potential of the proposed architecture for extending applications to extreme environments.
This article investigates cache-enabling unmanned aerial vehicle (UAV) cellular networks with the massive access capability supported by non-orthogonal multiple access (NOMA). The delivery of a large volume of multimedia contents to ground users is assisted by a mobile UAV base station, which caches some popular contents for wireless backhaul link traffic offloading. In cache-enabling UAV NOMA networks, the caching placement in the content caching phase and the radio resource allocation in the content delivery phase are crucial for network performance. To cope with the dynamic UAV locations and content requests in practical scenarios, we formulate the long-term caching placement and resource allocation optimization problem for content delivery delay minimization as a Markov decision process (MDP). The UAV acts as an agent to take actions for caching placement and resource allocation, which include the user scheduling of content requests and the power allocation of NOMA users. To tackle the MDP, we propose a Q-learning based caching placement and resource allocation algorithm, where the UAV learns and selects actions with a \emph{soft ${\varepsilon}$-greedy} strategy to search for the optimal match between actions and states. Since the action-state table size of Q-learning grows with the number of states in dynamic networks, we propose a function approximation based algorithm combining stochastic gradient descent and deep neural networks, which is suitable for large-scale networks. Finally, numerical results show that the proposed algorithms provide considerable performance gains over benchmark algorithms and achieve a trade-off between network performance and computational complexity.
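The tabular learning loop described above follows the standard Q-learning update with ε-greedy exploration; the sketch below uses generic states, actions, and rewards as placeholders for the paper's caching-placement and resource-allocation quantities.

```python
import random

def epsilon_greedy(Q, state, actions, eps):
    """Soft epsilon-greedy selection: explore with probability eps,
    otherwise exploit the current Q-value estimates."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
    q = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
```

Starting from an empty table, one update with reward 1.0 moves the estimate for the visited state-action pair to alpha * r = 0.1, and with eps = 0 the selection rule simply picks the action with the highest current estimate.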
Graph Convolutional Networks (GCNs) and their variants have received significant attention and achieved state-of-the-art performance on various recommendation tasks. However, many existing GCN models tend to perform recursive aggregation over all related nodes, which incurs a severe computational burden. Moreover, they favor multi-layer architectures in conjunction with complicated modeling techniques. Though effective, the excessive number of model parameters largely hinders their application in real-world recommender systems. To this end, in this paper, we propose a single-layer GCN model that achieves superior performance with remarkably less complexity than existing models. Our main contribution is three-fold. First, we propose a principled similarity metric named distribution-aware similarity (DA similarity), which can guide the neighbor sampling process and explicitly evaluate the quality of the input graph. We also show, through both theoretical analysis and empirical simulations, that DA similarity is positively correlated with the final performance. Second, we propose a simplified GCN architecture that employs a single GCN layer to aggregate information from the neighbors filtered by DA similarity and then generates the node representations. Moreover, the aggregation step is a parameter-free operation, so it can be done in a pre-processing manner to further reduce the training and inference costs. Third, we conduct extensive experiments on four datasets. The results verify that the proposed model considerably outperforms existing GCN models in recommendation performance and yields up to a few orders of magnitude speedup in training.
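The parameter-free pre-aggregation step can be illustrated as follows; the cosine score here is only a hypothetical stand-in for the paper's DA similarity metric, and the top-k filtering rule is an assumption for illustration.

```python
import numpy as np

def cosine_sim(p, q):
    """Hypothetical stand-in for DA similarity: cosine similarity
    between two nodes' (e.g. interaction) distributions."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))

def preaggregate(X, neighbors, sims, k):
    """Parameter-free aggregation: for each node, average the features of
    its top-k neighbors ranked by similarity. Because no learnable weights
    are involved, this can run once as a pre-processing pass."""
    out = np.zeros_like(X)
    for v, (nbrs, s) in enumerate(zip(neighbors, sims)):
        top = [nbrs[i] for i in np.argsort(-np.asarray(s))[:k]]
        out[v] = X[top].mean(axis=0) if top else X[v]
    return out
```

A single learnable GCN layer would then map these pre-aggregated features to the final node representations, which is where all of the model's parameters live.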
Millimeter wave (mmWave) communications can potentially meet the high data-rate requirements of unmanned aerial vehicle (UAV) networks. However, as the prerequisite of mmWave communications, narrow directional beam tracking is very challenging because of the three-dimensional (3D) mobility and attitude variation of UAVs. To address the beam tracking difficulties, we propose to integrate a conformal array (CA) with the surface of each UAV, which enables full spatial coverage and agile beam tracking in highly dynamic UAV mmWave networks. More specifically, the key contributions of our work are three-fold. 1) A new mmWave beam tracking framework is established for the CA-enabled UAV mmWave network. 2) A specialized hierarchical codebook is constructed to drive the directional radiating element (DRE)-covered cylindrical conformal array (CCA), which contains both the angular beam pattern and the subarray pattern to fully utilize the potential of the CA. 3) A codebook-based multiuser beam tracking scheme is proposed, where Gaussian process machine learning enabled UAV position/attitude prediction is developed to improve the beam tracking efficiency in conjunction with tracking-error aware adaptive beamwidth control. Simulation results validate the effectiveness of the proposed codebook-based beam tracking scheme in the CA-enabled UAV mmWave network, and demonstrate the advantages of the CA over the conventional planar array in terms of spectrum efficiency and outage probability in highly dynamic scenarios.
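The position/attitude predictor mentioned above relies on Gaussian process regression; a minimal posterior-mean sketch with an RBF kernel is shown below. The kernel choice, length scale, and one-dimensional time input are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, length=1.0, noise=1e-3):
    """Minimal GP regression posterior mean with an RBF kernel:
    mean(t*) = K(t*, t) @ (K(t, t) + noise * I)^(-1) @ y.
    Here t could be time stamps and y one coordinate of UAV position."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_train, t_train) + noise * np.eye(len(t_train))
    return k(t_query, t_train) @ np.linalg.solve(K, y_train)
```

The predicted trajectory (and its uncertainty, omitted here) could then feed the tracking-error aware beamwidth control: the larger the prediction error, the wider the beam needed to keep the user covered.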
With reinforcement learning, an agent can learn complex behaviors from high-level abstractions of the task. However, exploration and reward shaping remain challenging for existing methods, especially in scenarios where the extrinsic feedback is sparse. Expert demonstrations have been investigated to alleviate these difficulties, but a tremendous number of high-quality demonstrations is usually required. In this work, an integrated policy gradient algorithm is proposed to boost exploration and facilitate intrinsic reward learning from only a limited number of demonstrations. We achieve this by reformulating the original reward function with two additional terms, where the first term measures the Jensen-Shannon divergence between the current policy and the expert, and the second term estimates the agent's uncertainty about the environment. The presented algorithm is evaluated on a range of simulated tasks with sparse extrinsic reward signals, where only a single demonstrated trajectory is provided for each task; superior exploration efficiency and high average returns are demonstrated in all tasks. Furthermore, the agent is found to imitate the expert's behavior while sustaining a high return.
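The two-term reward reformulation described above can be sketched as follows; the additive form and the coefficients `lam1`, `lam2` are assumptions for illustration, since the abstract does not specify how the terms are combined.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete action distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def shaped_reward(r_ext, pi, pi_expert, uncertainty, lam1=1.0, lam2=0.1):
    """Sketch of a reformulated reward: the sparse extrinsic term, a
    penalty for diverging from the expert's action distribution, and an
    exploration bonus from the agent's uncertainty estimate."""
    return r_ext - lam1 * js_divergence(pi, pi_expert) + lam2 * uncertainty
```

When the policy matches the expert exactly, the divergence penalty vanishes and the shaped reward reduces to the extrinsic term plus the uncertainty bonus; disjoint distributions incur the maximal penalty of ln 2.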
The marriage of wireless big data and machine learning techniques revolutionizes wireless systems through a data-driven philosophy. However, the ever-exploding data volume and model complexity prevent centralized solutions from learning and responding within a reasonable time. Therefore, scalability becomes a critical issue. In this article, we aim to provide a systematic discussion of the building blocks of scalable data-driven wireless networks. On one hand, we discuss the forward-looking architecture and computing framework of scalable data-driven systems from a global perspective. On the other hand, we discuss the learning algorithms and model training strategies performed at each individual node from a local perspective. We also highlight several promising research directions in the context of scalable data-driven wireless communications to inspire future research.