Backhaul traffic congestion caused by the video traffic of a few popular files can be alleviated by storing to-be-requested content at various levels of wireless video caching networks. Typically, content service providers (CSPs) own the content, and users request their preferred content from the CSPs through their (wireless) internet service providers (ISPs). As these parties do not reveal their private information and business secrets, traditional techniques may not be readily usable to predict the dynamic changes in users' future demands. Motivated by this, we propose a novel resource-aware hierarchical federated learning (RawHFL) solution for predicting users' future content requests. A practical data acquisition technique is used that allows a user to update its local training dataset based on its requested content. Moreover, since networking and other computational resources are limited, only a subset of the users participates in the model training; under this constraint, we derive the convergence bound of the proposed algorithm. Based on this bound, we minimize a weighted utility function to jointly configure the controllable parameters and train RawHFL energy-efficiently under practical resource constraints. Our extensive simulation results validate the proposed algorithm's superiority, in terms of test accuracy and energy cost, over existing baselines.
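The hierarchical aggregation with partial client participation described above can be sketched as follows; the linear model, client counts, selection rule, and all hyperparameters are illustrative assumptions, not the paper's RawHFL algorithm or its resource-aware parameter configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a global linear model w is trained hierarchically,
# clients -> edge servers -> cloud (FedAvg-style weighted averaging).
d = 5
w_true = rng.normal(size=d)

def make_client(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

# two edge servers, three clients each, with unequal dataset sizes
edges = [[make_client(n) for n in (20, 30, 50)],
         [make_client(n) for n in (40, 10, 25)]]

def local_sgd(w, X, y, lr=0.05, steps=10):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w = w - lr * grad
    return w

w = np.zeros(d)
for rnd in range(20):                            # global rounds
    edge_models, edge_sizes = [], []
    for clients in edges:
        # resource-aware participation: only a random subset trains this round
        picked = [c for c in clients if rng.random() < 0.7] or clients[:1]
        models = [local_sgd(w.copy(), X, y) for X, y in picked]
        sizes = [len(y) for _, y in picked]
        # edge-level weighted aggregation
        edge_models.append(np.average(models, axis=0, weights=sizes))
        edge_sizes.append(sum(sizes))
    # cloud-level aggregation over edge models
    w = np.average(edge_models, axis=0, weights=edge_sizes)

print(np.linalg.norm(w - w_true))  # should be small
```

Despite only a subset of clients participating per round, the weighted hierarchical averages still drive the global model toward the true parameters.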
Hybrid beamforming is an attractive solution for building cost-effective and energy-efficient transceivers for millimeter-wave and terahertz systems. However, conventional hybrid beamforming techniques rely on analog components with a frequency-flat response, such as phase shifters and switches, which limits the flexibility of the achievable beam patterns. As a novel alternative, this paper proposes a new class of hybrid beamforming called joint phase-time arrays (JPTA), which additionally uses true-time-delay elements in the analog beamforming network to create frequency-dependent analog beams. The numerous benefits of such flexibility are exemplified using two important frequency-dependent beam behaviors. Subsequently, the JPTA beamformer design problem of generating any desired beam behavior is formulated, and near-optimal algorithms for this problem are proposed. Simulations show that the proposed algorithms outperform heuristic solutions for updating the JPTA beamformer. Furthermore, it is shown that JPTA can achieve the two exemplified beam behaviors with a single radio-frequency chain, whereas conventional hybrid beamforming requires the number of radio-frequency chains to scale with the number of antennas to achieve similar performance. Finally, a wide range of problems to further tap into the potential of JPTA is listed as future directions.
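To illustrate how jointly choosing per-element phases and true-time delays yields a frequency-dependent beam, the following sketch steers two subbands of a single-RF-chain uniform linear array toward two different directions using a linear phase taper plus a linear delay taper; all frequencies, angles, and array parameters are illustrative.

```python
import numpy as np

c = 3e8
fc = 28e9                      # carrier frequency (illustrative mmWave numbers)
d = c / (2 * fc)               # half-wavelength ULA spacing
N = 32                         # number of antennas
f1, f2 = 27.5e9, 28.5e9        # two subband center frequencies
th1, th2 = np.deg2rad(20), np.deg2rad(40)   # desired beam directions

# Solve for a common per-element delay step tau0 and phase step phi1 such
# that subband f1 points to th1 and subband f2 points to th2.
s1, s2 = np.sin(th1), np.sin(th2)
tau0 = d * (f1 * s1 - f2 * s2) / (c * (f1 - f2))
phi1 = 2 * np.pi * (f1 * tau0 - f1 * s1 * d / c)

def gain(theta, f):
    n = np.arange(N)
    a = np.exp(1j * 2 * np.pi * f * n * d * np.sin(theta) / c)  # array response
    w = np.exp(1j * (n * phi1 - 2 * np.pi * f * n * tau0))      # phase + delay taps
    return abs(np.sum(w * a)) / N

thetas = np.deg2rad(np.linspace(-90, 90, 1801))
peak1 = np.rad2deg(thetas[np.argmax([gain(t, f1) for t in thetas])])
peak2 = np.rad2deg(thetas[np.argmax([gain(t, f2) for t in thetas])])
print(peak1, peak2)   # ~20 deg at f1, ~40 deg at f2
```

A phase-shifter-only array produces only the frequency-flat taper (beam squint aside); it is the delay term, whose phase scales with frequency, that makes the beam direction a designable function of frequency.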
Large-scale channel prediction, i.e., estimation of the pathloss from geographical/morphological/building maps, is an essential component of wireless network planning. Ray-tracing (RT)-based methods have been widely used for many years, but they require significant computational effort that may become prohibitive with increased network densification and/or the use of higher frequencies in B5G/6G systems. In this paper, we propose a data-driven, model-free pathloss map prediction (PMP) method, called PMNet. PMNet uses a supervised learning approach: it is trained on a limited amount of RT (or channel measurement) data and map data. Once trained, PMNet can predict the pathloss at arbitrary locations with high accuracy (an RMSE level of $10^{-2}$) in a few milliseconds. We further extend PMNet by employing transfer learning (TL). TL allows PMNet to learn a new network scenario quickly (5.6x faster training) and efficiently (using 4.5x less data) by transferring knowledge from a pre-trained model, while retaining accuracy. Our results demonstrate that PMNet is a scalable and generalizable ML-based PMP method, showing its potential for use in several network optimization applications.
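The transfer-learning idea above (fine-tuning a model pre-trained on one scenario with few samples from a new one) can be sketched with a toy linear model; everything here is an illustrative stand-in for PMNet and its map data, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
w_a = rng.normal(size=d)               # "scenario A" ground-truth mapping
w_b = w_a + 0.3 * rng.normal(size=d)   # related "scenario B" (shifted)

def data(w, n):
    X = rng.normal(size=(n, d))
    return X, X @ w

def train(w0, X, y, lr=0.05, steps=50):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient descent
    return w

Xa, ya = data(w_a, 500)          # abundant source-scenario data
Xb, yb = data(w_b, 5)            # scarce target-scenario data

w_pre = train(np.zeros(d), Xa, ya)           # pre-train on scenario A
w_tl = train(w_pre, Xb, yb)                  # fine-tune on scenario B
w_scratch = train(np.zeros(d), Xb, yb)       # train from scratch on B

Xt, yt = data(w_b, 1000)                     # held-out target test set
err = lambda w: np.mean((Xt @ w - yt) ** 2)
print(err(w_tl), err(w_scratch))             # fine-tuning should win
```

With only five target samples, training from scratch cannot recover the directions the scarce data never excites, while the pre-trained initialization already places those directions close to the new scenario.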
This study presents a novel deep reinforcement learning (DRL)-based handover (HO) protocol, called DHO, specifically designed to address the persistent challenge of long propagation delays in the HO procedures of low-Earth-orbit (LEO) satellite networks. DHO skips the Measurement Report (MR) step of the HO procedure by leveraging its predictive capability after being trained on a pre-determined LEO satellite orbital pattern. This simplification eliminates the propagation delay incurred during the MR phase while still providing effective HO decisions. The proposed DHO outperforms the legacy HO protocol across diverse network conditions in terms of access delay, collision rate, and handover success rate, demonstrating the practical applicability of DHO in real-world networks. Furthermore, the study examines the trade-off between access delay and collision rate, and evaluates the training performance and convergence of DHO using various DRL algorithms.
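As a highly simplified illustration of learning HO decisions from a pre-determined orbital pattern without measurement reports, the following tabular sketch (a bandit-style update over a toy periodic visibility pattern; not the paper's DRL formulation) learns which satellite to select in each time slot purely from success/failure feedback.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 24, 4                         # orbital period slots, candidate satellites
# Toy deterministic orbital pattern: the "correct" satellite per slot.
best = np.array([(t // (T // K)) % K for t in range(T)])

Q = np.zeros((T, K))                 # value of choosing satellite a at slot t
eps, lr = 0.2, 0.5
for episode in range(300):
    for t in range(T):
        # epsilon-greedy action: pick a satellite WITHOUT any measurement report
        a = rng.integers(K) if rng.random() < eps else int(np.argmax(Q[t]))
        r = 1.0 if a == best[t] else -1.0      # successful vs failed HO attempt
        Q[t, a] += lr * (r - Q[t, a])          # bandit-style value update
policy = Q.argmax(axis=1)
print((policy == best).mean())       # → 1.0 after training
```

Because the orbital pattern repeats deterministically, the slot index alone is enough state to predict the right target, which is what lets the MR phase (and its propagation delay) be skipped.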
Localization in GPS-denied outdoor locations, such as street canyons in an urban or metropolitan environment, has many applications. Machine learning (ML) is widely used to tackle this critical problem. One challenge lies in the mixture of line-of-sight (LOS), obstructed LOS (OLOS), and non-LOS (NLOS) conditions. In this paper, we consider semantic localization, which treats these three propagation conditions as "semantic objects" and determines them jointly with the actual location, and we show that this increases accuracy and robustness. Furthermore, the propagation conditions are highly dynamic, since obstruction by cars or trucks can change the channel state information (CSI) at a fixed location over time. We therefore consider blockage by such dynamic objects as another semantic state. Based on these considerations, we formulate semantic localization as a joint-task learning problem (coordinate regression and semantic classification). Another problem created by the dynamics is that each location may be characterized by a number of different CSIs. To avoid the need for excessive amounts of labeled training data, we propose a multi-task deep domain adaptation (DA)-based localization technique, training neural networks with a limited number of labeled samples and numerous unlabeled ones. Moreover, we introduce novel scenario-adaptive learning strategies to ensure efficient representation learning and successful knowledge transfer. Finally, we use Bayesian theory to model the uncertainty of the importance weight of each task, reducing the need for time-consuming parameter fine-tuning; furthermore, under some mild assumptions, we derive the related log-likelihood for the joint task and present the deep homoscedastic DA-based localization method.
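The homoscedastic-uncertainty idea for weighting the joint task, i.e., learning a log-variance parameter per task so that neither the coordinate-regression nor the semantic-classification term dominates, can be sketched on fixed stand-in losses; the loss values and optimizer settings below are illustrative, not the paper's.

```python
import numpy as np

# Fixed per-task losses standing in for the regression (coordinates) and
# classification (LOS/OLOS/NLOS semantics) terms; values are illustrative.
L_reg, L_cls = 4.0, 0.25

# Joint objective: sum_i exp(-s_i) * L_i + s_i, with learnable log-variances s_i.
s = np.zeros(2)
for _ in range(500):
    grad = np.array([-np.exp(-s[0]) * L_reg + 1.0,
                     -np.exp(-s[1]) * L_cls + 1.0])
    s -= 0.05 * grad                  # gradient descent on the weights only

# At the optimum s_i = log(L_i), so each weighted term exp(-s_i) * L_i equals 1
# and the tasks are automatically balanced without manual tuning.
print(s, np.log([L_reg, L_cls]))
```

This is why such weighting reduces the need for time-consuming manual fine-tuning of the importance weights: the balance point is reached by gradient descent on the log-variances themselves.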
The geometry-based stochastic channel models (GSCMs) that can describe realistic channel impulse responses often rely on the existence of both {\em local} and {\em far} scatterers. However, their visibility from the base station (BS) and the mobile station (MS) depends on their relative heights and positions. For example, whether a scatterer is visible from the BS differs from whether it is visible from the MS, and depends on the scatterer's height. To capture this, we propose a novel GSCM in which each scatterer has dual disk visibility regions (VRs) centered on itself, one for the BS and one for the MS, with their radii being our model parameters. Our model consists of {\em short} and {\em tall} scatterers, both modeled using independent inhomogeneous Poisson point processes (IPPPs) with distinct dual VRs. We also introduce a probability parameter to account for the varying visibility of tall scatterers from different MSs, effectively emulating their noncontiguous VRs. Using stochastic geometry, we derive the probability mass function (PMF) of the number of multipath components (MPCs), the marginal and joint distance distributions for an active scatterer, the mean time of arrival (ToA), and the mean received power through non-line-of-sight (NLoS) paths for the proposed model. By selecting appropriate model parameters, the propagation characteristics of our GSCM are demonstrated to closely emulate those of the COST-259 model.
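A Monte Carlo sketch of the dual-VR idea follows: a scatterer contributes an MPC only if it lies within both the BS's and the MS's visibility disks, so the mean MPC count equals the scatterer density times the lens-shaped overlap area. A homogeneous PPP and illustrative parameters are used here in place of the paper's IPPPs and exact VR formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.05                 # scatterer density (per m^2), illustrative
bs, ms = np.array([0.0, 0.0]), np.array([60.0, 0.0])
r_bs, r_ms = 50.0, 40.0    # dual visibility-region radii (model parameters)

# Monte Carlo: drop a PPP on a 200 m x 200 m box covering both disks and
# count scatterers that are visible from BOTH the BS and the MS.
counts = []
for _ in range(2000):
    n = rng.poisson(lam * 200 * 200)
    pts = rng.uniform(-100, 100, size=(n, 2)) + np.array([30.0, 0.0])
    vis = (np.linalg.norm(pts - bs, axis=1) < r_bs) & \
          (np.linalg.norm(pts - ms, axis=1) < r_ms)
    counts.append(vis.sum())

# Analytical mean: lambda times the lens area of the two overlapping disks.
dct = np.linalg.norm(ms - bs)
a1 = r_bs**2 * np.arccos((dct**2 + r_bs**2 - r_ms**2) / (2 * dct * r_bs))
a2 = r_ms**2 * np.arccos((dct**2 + r_ms**2 - r_bs**2) / (2 * dct * r_ms))
tri = 0.5 * np.sqrt((-dct + r_bs + r_ms) * (dct + r_bs - r_ms) *
                    (dct - r_bs + r_ms) * (dct + r_bs + r_ms))
print(np.mean(counts), lam * (a1 + a2 - tri))   # the two should be close
```

By the thinning/restriction property of Poisson point processes, the number of active scatterers is itself Poisson with mean equal to the intensity times the overlap area, which is the kind of closed-form PMF the stochastic-geometry analysis exploits.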
This paper introduces a novel line-of-sight (LoS) $\beta$-$\gamma$ terahertz (THz) channel model that closely mirrors physical reality by considering radiation trapping. Our channel model provides exhaustive modeling of the physical phenomena, including the amount of re-radiation available at the receiver, parametrized by $\beta$, and the balance between scattering and noise contributions, parametrized by $\gamma$. Our findings indicate a nontrivial relationship between the average limiting received signal-to-noise ratio (SNR) and distance, emphasizing the significance of $\gamma$ in THz system design. We further propose new maximum-likelihood (ML) thresholds for pulse amplitude modulation (PAM) and quadrature amplitude modulation (QAM) schemes, resulting in analytical symbol error rate (SER) expressions that account for different noise variances across constellation points. The results confirm that the analytical SER closely matches the true simulated SER when using an optimal detector. As expected, under maximum molecular re-radiation, the true SER is shown to be lower than that produced by a suboptimal detector that assumes equal noise variances.
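The effect of unequal noise variances on ML detection can be sketched for binary PAM: comparing the full Gaussian likelihoods (including the variance-dependent terms) yields a different decision rule, and a lower SER, than the equal-variance midpoint threshold. The amplitudes and variances below are illustrative, not taken from the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.0, 1.0])          # 2-PAM amplitudes (illustrative)
sig = np.array([0.1, 0.3])        # signal-dependent noise std per symbol

def loglik(y, i):
    # Full Gaussian log-likelihood; the -log(sigma) term matters when
    # the variances differ across constellation points.
    return -np.log(sig[i]) - (y - a[i])**2 / (2 * sig[i]**2)

n = 200_000
tx = rng.integers(2, size=n)
y = a[tx] + sig[tx] * rng.normal(size=n)

ml = (loglik(y, 1) > loglik(y, 0)).astype(int)   # variance-aware ML detector
naive = (y > 0.5).astype(int)                    # equal-variance midpoint rule

print((ml != tx).mean(), (naive != tx).mean())
```

With unequal variances the ML boundary is no longer the midpoint (it solves a quadratic in $y$), which is exactly why SER expressions must account for per-point noise variances.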
Indoor localization is a challenging task. There is no robust and nearly universal approach, in contrast to outdoor environments, where GPS is dominant. Recently, machine learning (ML) has emerged as the most promising approach for achieving accurate indoor localization, yet its main challenge is the requirement for large datasets to train the neural networks. The data collection procedure is costly and laborious, as it requires extensive measurements and labeling for different indoor environments. The situation can be improved by data augmentation (DA), a general framework for enlarging ML datasets that makes ML systems more robust and increases their generalization capabilities. In this paper, we propose two simple yet surprisingly effective DA algorithms for channel state information (CSI)-based indoor localization, motivated by physical considerations. We show that the number of measurements required for a given accuracy may be decreased by an order of magnitude. Specifically, we demonstrate the algorithms' effectiveness through experiments on a measured indoor WiFi dataset: as little as 10% of the original dataset is enough to achieve the same performance as the full dataset. We also show that further augmenting the full dataset with the proposed techniques improves the test accuracy more than three-fold.
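One physically motivated DA strategy for CSI (an illustrative sketch, not necessarily the paper's two algorithms) applies random global phase rotations and small timing offsets to each measured snapshot, which change the raw complex CSI while leaving the location-dependent magnitude profile intact.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_csi(h, n_aug=5, max_delay=2e-9, subcarrier_spacing=312.5e3):
    """Generate augmented copies of one CSI vector h (complex, per subcarrier).

    Physically motivated transforms that preserve location information:
    - a random global phase (unsynchronized oscillator phase), and
    - a small random timing offset (linear phase slope across subcarriers).
    The transform choices and parameter values here are illustrative.
    """
    k = np.arange(len(h))
    out = []
    for _ in range(n_aug):
        phi = rng.uniform(0, 2 * np.pi)                    # global phase
        tau = rng.uniform(-max_delay, max_delay)           # timing offset
        out.append(h * np.exp(1j * (phi - 2 * np.pi * k * subcarrier_spacing * tau)))
    return np.array(out)

h = rng.normal(size=64) + 1j * rng.normal(size=64)   # one measured CSI snapshot
aug = augment_csi(h)
# The magnitude profile (a common location fingerprint) is exactly preserved:
print(np.allclose(np.abs(aug), np.abs(h)))   # → True
```

Because these transforms mimic hardware impairments that occur anyway between measurement sessions, a network trained on the augmented set learns to ignore them rather than memorize them.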
Pathloss prediction is an essential component of wireless network planning. While ray-tracing-based methods have been used successfully for many years, they require significant computational effort that may become prohibitive with increased network densification and/or the use of higher frequencies in 5G/B5G (beyond-5G) systems. In this paper, we propose and evaluate a data-driven and model-free pathloss prediction method, dubbed PMNet. This method uses a supervised learning approach: a neural network (NN) is trained on a limited amount of ray-tracing (or channel-measurement) data and map data, and then predicts the pathloss, with a high level of accuracy, at locations for which no ray-tracing data are available. Our proposed pathloss-map-prediction-oriented NN architecture, which is empowered by state-of-the-art computer vision techniques, outperforms previously proposed architectures (e.g., UNet, RadioUNet) in terms of accuracy while showing generalization capability. Moreover, PMNet trained on a 4-fold smaller dataset surpasses the other baselines (trained on a 4-fold larger dataset), corroborating the potential of PMNet.
Terahertz (THz) communication signals are susceptible to severe degradation because of molecular interaction with the atmosphere, in the form of absorption and subsequent re-radiation. Recently, the reconfigurable intelligent surface (RIS) has emerged as a potential technology to assist THz communications by boosting signal power or providing virtual line-of-sight paths. However, the re-radiated energy has been modeled in the literature either as a non-line-of-sight scattering component or as additive Gaussian noise. Since its precise characterization is still a work in progress, this paper presents the first comparative investigation of the performance of an RIS-aided THz system under these two extreme re-radiation models. In particular, we first develop a novel parametric channel model that encompasses both re-radiation models through a simple parameter change, and then utilize it to design a robust block-coordinate descent (BCD) algorithmic framework that maximizes a lower bound on channel capacity under imperfect channel state information. In this framework, the original problem is split into two sub-problems: (a) receive beamformer optimization and (b) RIS phase-shift optimization. We also analytically demonstrate the limited interference suppression capability of a passive RIS. Our numerical results demonstrate that slightly better throughput is achieved when the re-radiation manifests as scattering.
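The alternating structure of the two sub-problems can be sketched for a simplified SNR-maximization objective with perfect CSI (not the paper's capacity lower bound under imperfect CSI): step (a) applies maximum-ratio combining for the current effective channel, and step (b) co-phases the cascaded paths at the combiner output; each step cannot decrease the objective.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 32                        # receive antennas, RIS elements
# Column m of G: cascaded channel through RIS element m (illustrative i.i.d.)
G = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # random initial RIS phases
vals = []
for _ in range(10):
    # (a) receive beamformer step: MRC for the current effective channel
    h = G @ theta
    w = h / np.linalg.norm(h)
    # (b) RIS phase step: co-phase all cascaded paths at the combiner output
    c = w.conj() @ G
    theta = np.exp(-1j * np.angle(c))
    vals.append(abs(w.conj() @ G @ theta) ** 2)

print(vals[0], vals[-1])   # objective is monotonically non-decreasing
```

The monotone-improvement property of each block update is what guarantees convergence of such BCD schemes to a stationary point of the joint problem.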