Outdoor-to-indoor communication in millimeter-wave (mmWave) cellular networks has been a challenging research problem due to the severe attenuation and high penetration loss caused by the propagation characteristics of mmWave signals. We propose a viable solution that implements an outdoor-to-indoor mmWave communication system with the aid of an active intelligent transmitting surface (active-ITS), which allows the incoming signal from an outdoor base station (BS) to pass through the surface and be received by indoor user equipments (UEs) after its phase is shifted and its amplitude is amplified. We then investigate the joint precoding of the BS and active-ITS to maximize the weighted sum-rate (WSR) of the communication system. An efficient block coordinate descent (BCD) based algorithm is developed to solve the problem, yielding suboptimal solutions in nearly closed form. In addition, to reduce the size and hardware cost of an active-ITS, we provide a block-amplifying architecture that partially removes the power-amplifying circuit components, where multiple transmissive-type elements (TEs) in each block share the same power amplifier. Simulations indicate that the active-ITS can achieve a given performance with far fewer TEs than a passive-ITS under the same total system power consumption, making it suitable for size-limited and aesthetics-sensitive scenarios, and that the inevitable performance degradation caused by the block-amplifying architecture is acceptable.
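The block coordinate descent strategy mentioned above can be illustrated with a minimal sketch: alternately fix one block of variables and optimize the other in closed form until convergence. The toy quadratic objective below is purely illustrative and is not the paper's WSR problem.

```python
import math

# Toy objective (illustrative only): f(x, y) = (x - 1)^2 + (y - 2)^2 + x*y.
# BCD alternates exact minimization over each block while the other is fixed.
def bcd_toy(iters=50):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (2.0 - y) / 2.0   # argmin_x f(x, y): from df/dx = 2(x - 1) + y = 0
        y = (4.0 - x) / 2.0   # argmin_y f(x, y): from df/dy = 2(y - 2) + x = 0
    return x, y

x, y = bcd_toy()
# Converges to the stationary point (0, 2) of this convex toy problem.
```

Each sub-problem here has a closed-form minimizer, mirroring the "nearly closed-form" sub-solutions described in the abstract.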
For many applications envisioned for the Internet of Things (IoT), sensors are expected to have very low cost and zero power consumption, requirements that can be satisfied by meta-material-sensor-based IoT, i.e., meta-IoT. Because their constituent meta-materials reflect wireless signals with environment-sensitive reflection coefficients, meta-IoT sensors can achieve simultaneous sensing and transmission without any active modulation. However, to maximize sensing accuracy, the structures of meta-IoT sensors need to be optimized considering their joint influence on sensing and transmission, which is challenging due to the high computational complexity of evaluating this influence, especially for a large number of sensors. In this paper, we propose a joint sensing and transmission design method for meta-IoT systems with a large number of sensors, which can efficiently optimize the sensing accuracy of the system. Specifically, a computationally efficient received-signal model is established to evaluate the joint influence of the meta-material structure on sensing and transmission. Then, a sensing algorithm based on deep unsupervised learning is designed to obtain accurate sensing results in a robust manner. Experiments with a prototype verify that the system has higher sensitivity and a longer transmission range than existing designs, and can correctly sense environmental anomalies within 2 meters.
Automatic modulation classification (AMC) using deep neural networks (DNNs) outperforms traditional classification techniques, even in challenging wireless channel environments. However, adversarial attacks degrade the accuracy of DNN-based AMC by injecting well-designed perturbations into the wireless channel. In this paper, we propose a novel generative adversarial network (GAN)-based countermeasure to safeguard DNN-based AMC systems against adversarial examples. The GAN-based defense aims to eliminate adversarial examples before they are fed to the DNN-based classifier. Specifically, we demonstrate the resiliency of our proposed defense GAN against the fast gradient sign method (FGSM), one of the most potent algorithms for crafting perturbed signals. The existing defense-GAN was designed for image classification and does not work in the communication setting considered here. Our countermeasure therefore deploys GANs with a mixture of generators to overcome the mode-collapse problem that a typical GAN faces in radio signal classification. Simulation results show the effectiveness of the proposed defense GAN: it raises the accuracy of DNN-based AMC under adversarial attacks to approximately 81%.
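The FGSM attack referenced above perturbs an input along the sign of the loss gradient, x_adv = x + ε·sign(∇ₓL). A minimal sketch for a logistic classifier (an assumed toy model, not the paper's AMC network) keeps the gradient analytic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method for a logistic model p = sigmoid(w.x + b).

    For cross-entropy loss, the input gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative values (hypothetical, not from the paper).
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, 0.5]); y = 1.0          # true label is 1
x_adv = fgsm(x, y, w, b, eps=0.1)          # each feature shifted by +/- 0.1
```

The defense GAN described in the abstract would attempt to project such an x_adv back onto the clean-signal manifold before classification.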
Today, very few deep-learning-based mobile augmented reality (MAR) applications run on mobile devices because they consume significant energy. In this paper, we design an edge-based, energy-aware MAR system that enables MAR devices to dynamically change their configurations, such as CPU frequency, computation model size, and image offloading frequency, based on user preferences, camera sampling rates, and available radio resources. Our dynamic MAR configuration adaptation minimizes the per-frame energy consumption of multiple MAR clients without degrading their preferred MAR performance metrics, such as latency and detection accuracy. To thoroughly analyze the interactions among MAR configurations, user preferences, camera sampling rate, and energy consumption, we propose, to the best of our knowledge, the first comprehensive analytical energy model for MAR devices. Based on this model, we design the LEAF optimization algorithm to guide MAR configuration adaptation and server radio resource allocation. An image offloading frequency orchestrator, coordinating with LEAF, is developed to adaptively regulate edge-based object detection invocations and further improve the energy efficiency of MAR devices. Extensive evaluations validate the performance of the proposed analytical model and algorithms.
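The shape of such a per-frame energy model can be sketched with a deliberately simplified budget: local compute energy plus transmission energy for offloading one camera frame. This toy decomposition and all of its parameters are assumptions for illustration, not the paper's analytical model.

```python
def per_frame_energy(cpu_power_w, compute_time_s, tx_power_w, image_bits, rate_bps):
    """Toy per-frame energy budget (joules): compute term + offloading term.

    Transmission time is image size divided by the available uplink rate,
    so lowering the offloading frequency or image size cuts the second term.
    """
    e_compute = cpu_power_w * compute_time_s
    e_tx = tx_power_w * image_bits / rate_bps
    return e_compute + e_tx

# Hypothetical numbers: 2 W CPU for 50 ms, 1 W radio, 1 MB frame, 20 Mb/s uplink.
e = per_frame_energy(2.0, 0.05, 1.0, 8_000_000, 20_000_000)
# 2*0.05 + 1*(8e6/2e7) = 0.1 + 0.4 = 0.5 J per frame
```

An optimizer like the LEAF algorithm described above would tune the configuration knobs (CPU frequency, model size, offloading frequency) that drive these two terms.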
Reconfigurable intelligent surfaces (RISs) have received increasing attention due to their capability of extending cell coverage by reflecting signals toward receivers. This paper considers an RIS-assisted high-speed train (HST) communication system to improve the coverage probability. We derive a closed-form expression for the coverage probability. Moreover, we analyze the impact of key system parameters, including the transmission power, the signal-to-noise ratio threshold, and the horizontal distance between the base station and the RIS. Simulation results verify the effectiveness of RIS-assisted HST communications in terms of coverage probability.
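Coverage probability is P(SNR > γ) for a threshold γ. While the paper derives a closed form for the RIS-assisted HST setting, the quantity itself can be sketched with a Monte Carlo estimate under an assumed Rayleigh-fading channel (an illustrative assumption, not the paper's model):

```python
import numpy as np

def coverage_probability(snr_mean_db, gamma_db, trials=200_000, seed=0):
    """Monte Carlo estimate of P(SNR > gamma) under Rayleigh fading."""
    rng = np.random.default_rng(seed)
    snr_mean = 10 ** (snr_mean_db / 10)   # convert dB to linear scale
    gamma = 10 ** (gamma_db / 10)
    h2 = rng.exponential(1.0, trials)     # |h|^2 ~ Exp(1) for Rayleigh fading
    return float(np.mean(snr_mean * h2 > gamma))

p = coverage_probability(10.0, 0.0)
# For Rayleigh fading the exact value is exp(-gamma/snr_mean) = exp(-0.1) ~ 0.905
```

Sweeping `snr_mean_db` or `gamma_db` here mirrors the parameter studies (transmission power, SNR threshold) described in the abstract.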
Owing to the rapid development of sensor technology, hyperspectral (HS) remote sensing (RS) imaging has provided a vast amount of spatial and spectral information for observing and analyzing the Earth's surface from distant data acquisition platforms such as aircraft, spacecraft, and satellites. Recent advances in, and even revolutions of, the HS RS technique offer opportunities to realize the full potential of various applications, while posing new challenges for efficiently processing and analyzing the enormous volume of acquired HS data. Because it preserves the inherent 3-D structure of HS data, tensor decomposition has attracted widespread attention and research in HS data processing tasks over the past decades. In this article, we present a comprehensive overview of tensor decomposition, specifically contextualized around five broad topics in HS data processing: HS restoration, compressed sensing, anomaly detection, super-resolution, and spectral unmixing. For each topic, we elaborate on the remarkable achievements of tensor decomposition models for HS RS, with a pivotal description of existing methodologies and a representative exhibition of experimental results. We then outline and discuss the remaining challenges and follow-up research directions from the perspective of real HS RS practice and of tensor decomposition merged with advanced priors and even deep neural networks. This article summarizes the different tensor decomposition-based HS data processing methods and categorizes them, from simple adoptions to complex combinations with other priors, for algorithm beginners. We also expect this survey to suggest new investigations and development trends to experienced researchers who understand tensor decomposition and HS RS to some extent.
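The basic object underlying the methods surveyed above can be sketched in a few lines: a rank-1 CP tensor is an outer product of factor vectors (for HS data, the three modes would correspond to two spatial dimensions and the spectral dimension), and every mode-n unfolding of it is a rank-1 matrix. The toy sizes below are illustrative, not tied to any HS dataset.

```python
import numpy as np

# Rank-1 CP tensor: T[i, j, k] = a[i] * b[j] * c[k] via an outer product.
a = np.array([1.0, 2.0])
b = np.array([1.0, -1.0, 2.0])
c = np.array([3.0, 1.0, 0.5, -2.0])
T = np.einsum('i,j,k->ijk', a, b, c)      # shape (2, 3, 4)

# Mode-1 unfolding: rows indexed by the first mode, remaining modes flattened.
T1 = T.reshape(2, -1)
rank = np.linalg.matrix_rank(T1)          # rank 1, since T is a rank-1 tensor
```

A rank-R CP model sums R such outer products; the surveyed HS methods exploit the low rank of these unfoldings as a structural prior.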
As a widely used localization and sensing technique, radar will play an important role in future wireless networks. However, traditional radars passively accept the wireless channels between the radar and its targets, which limits target detection performance. To address this issue, we propose using a reconfigurable intelligent surface (RIS) to improve the detection accuracy of radar systems, exploiting its capability to customize channel conditions by adjusting its phase shifts; we refer to this system as MetaRadar. In such a system, it is challenging to jointly optimize the radar waveforms and the RIS phase shifts to improve multi-target detection performance. To tackle this challenge, we design a waveform and phase shift optimization (WPSO) algorithm that effectively solves the multi-target detection problem, and we also analyze the performance of the proposed MetaRadar scheme theoretically. Simulation results show that the detection performance of MetaRadar is significantly better than that of traditional radar schemes.
Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with richer virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experiences and digital transformation, but most remain incoherent rather than integrated into a single platform. In this context, the metaverse, a term formed by combining "meta" and "universe", has been introduced as a shared virtual world fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has shown great importance in processing big data to enhance immersive experiences and enable human-like intelligence in virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first provide preliminaries on AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then present a comprehensive investigation of AI-based methods across six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twins, and neural interfaces. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open several future research directions in AI for the metaverse.
With the explosive growth of computation requirements, the multi-access edge computing (MEC) paradigm has emerged as an effective mechanism. Moreover, for Internet of Things (IoT) devices in disaster or remote areas that require MEC services, unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs) can provide aerial computing services. In this paper, we develop a hierarchical aerial computing framework composed of HAPs and UAVs to provide MEC services for various IoT applications. In particular, the problem is formulated to maximize the total IoT data computed by the aerial MEC platforms, subject to the delay requirements of the IoT and multiple resource constraints of the UAVs and HAPs; this is an integer programming problem and intractable to solve. Because exhaustive search is prohibitively complex, we handle the problem by presenting a matching-game-based algorithm for the offloading decisions from IoT devices to UAVs, as well as a heuristic algorithm for the offloading decisions between UAVs and HAPs. The externality caused by the interplay of different IoT devices in the matching is tackled by an externality-elimination mechanism. Moreover, an adjustment algorithm is proposed to make full use of the aerial resources. The complexity of the proposed algorithms is analyzed, extensive simulation results verify their efficiency, and the system performance is further examined through numerical results.
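The matching-game step above can be sketched as a quota-constrained deferred-acceptance procedure: devices propose to UAVs in preference order, and each UAV keeps only its most-preferred proposers up to its capacity. The names and preference lists below are illustrative assumptions, not the paper's algorithm or data.

```python
def match(device_prefs, uav_prefs, quota):
    """Deferred acceptance: devices propose, UAVs reject beyond their quota."""
    matched = {u: [] for u in uav_prefs}
    free = list(device_prefs)
    nxt = {d: 0 for d in device_prefs}      # next UAV each device will try
    while free:
        d = free.pop(0)
        if nxt[d] >= len(device_prefs[d]):
            continue                         # device exhausted its list: unmatched
        u = device_prefs[d][nxt[d]]
        nxt[d] += 1
        matched[u].append(d)
        if len(matched[u]) > quota[u]:
            # UAV keeps its quota[u] most-preferred devices, rejects the worst.
            matched[u].sort(key=uav_prefs[u].index)
            free.append(matched[u].pop())
    return matched

# Hypothetical instance: three devices, two UAVs, u1 can serve only one device.
device_prefs = {'d1': ['u1', 'u2'], 'd2': ['u1', 'u2'], 'd3': ['u1', 'u2']}
uav_prefs = {'u1': ['d2', 'd1', 'd3'], 'u2': ['d1', 'd3', 'd2']}
result = match(device_prefs, uav_prefs, {'u1': 1, 'u2': 2})
# u1 keeps its favorite d2; the rejected d1 and d3 settle at u2.
```

An externality-elimination step, as in the paper, would additionally re-evaluate preferences that change when co-assigned devices interfere with each other; that refinement is omitted from this sketch.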
Next-generation networks need to meet ubiquitous and high data-rate demands. Therefore, this paper considers the throughput and trajectory optimization of terahertz (THz)-enabled unmanned aerial vehicles (UAVs) in sixth-generation (6G) communication networks. In the considered scenario, multiple UAVs must provide on-demand terabit-per-second (Tb/s) services to an urban area alongside existing terrestrial networks. However, THz-empowered UAVs pose new constraints, e.g., dynamic THz channel conditions for ground user (GU) association and UAV trajectory optimization to fulfill the GUs' throughput demands. Thus, a framework is proposed to address these challenges, in which a joint UAV-GU association, transmit power, and trajectory optimization problem is studied. The formulated problem is a mixed-integer non-linear program (MINLP), which is NP-hard to solve. Consequently, an iterative algorithm is proposed that solves three sub-problems iteratively: UAV-GU association, transmit power, and trajectory optimization. Simulation results demonstrate that the proposed algorithm increases throughput by up to 10%, 68.9%, and 69.1%, respectively, compared to the baseline algorithms.