High-altitude platform (HAP)-aided terrestrial-aerial communication technology based on free-space optical (FSO) and terahertz (THz) feeder links has recently attracted notable interest due to its great potential for achieving higher data rates and connectivity. Nonetheless, harsh vertical propagation environments and potential aerial eavesdroppers are two of the main challenges limiting the reliability and security of such technology. In this work, a secrecy-enhancing scheme for HAP-aided ground-aerial communication is proposed. The considered network consists of HAP-assisted communication between a ground station and a legitimate user under the threat of both aerial and ground eavesdroppers. The proposed scheme leverages (i) HAP diversity, by exploiting the presence of multiple flying HAPs, and (ii) a hybrid FSO/THz transmission scheme, to offer better resilience against eavesdropping attacks. An analytical secrecy outage probability (SOP) expression is derived for the scheme in consideration. Results demonstrate the notable security gain of the proposed scheme over two benchmarks, (i) a single-HAP scheme and (ii) a THz feeder-based scheme: with $4$ HAPs, the proposed scheme's SOP is four orders of magnitude lower than that of the first benchmark, while a $5$-dB secrecy gain is achieved over the second.
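The SOP gain from HAP diversity can be illustrated with a Monte Carlo sketch. The exponential (Rayleigh-type) SNR fading, the mean SNRs, and the target secrecy rate below are illustrative assumptions only; the paper's actual FSO/THz channel models are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def secrecy_outage_prob(n_haps, mean_snr_leg=10.0, mean_snr_eve=1.0,
                        rate_s=1.0, trials=200_000):
    """Monte Carlo SOP estimate with selection diversity over n_haps.

    Exponential SNR fading is a stand-in for the paper's FSO/THz models.
    """
    # Best legitimate link among the HAPs (selection diversity)
    snr_leg = rng.exponential(mean_snr_leg, size=(trials, n_haps)).max(axis=1)
    # Eavesdropper link
    snr_eve = rng.exponential(mean_snr_eve, size=trials)
    # Instantaneous secrecy capacity (non-negative)
    c_s = np.maximum(np.log2(1 + snr_leg) - np.log2(1 + snr_eve), 0.0)
    return np.mean(c_s < rate_s)

for n in (1, 2, 4):
    print(n, secrecy_outage_prob(n))
```

More HAPs raise the best legitimate-link SNR, so the estimated SOP decreases with `n_haps`, mirroring the diversity gain described above.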
This paper explores a novel Neural Network (NN) architecture suitable for Beamformed Fingerprint (BFF) localization in a millimeter-wave (mmWave) multiple-input multiple-output (MIMO) outdoor system. The mmWave frequency bands have attracted significant attention due to the precise timing measurements they enable, making them appealing for applications demanding accurate device localization and trajectory estimation. The proposed NN architecture captures BFF sequences originating from various user paths and, through learning mechanisms, estimates the corresponding trajectories. Specifically, we propose a trajectory-estimation method employing a transformer network (TN) that relies on attention mechanisms, estimating wireless device trajectories from BFF sequences recorded within a mmWave MIMO outdoor system. To validate the efficacy of the proposed approach, numerical experiments are conducted on a comprehensive dataset of radio measurements in an outdoor setting, complemented with ray tracing to simulate wireless signal propagation at 28 GHz. The results illustrate that the TN-based trajectory estimator outperforms other methods from the existing literature and generalizes effectively to new trajectories outside the training dataset.
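The attention mechanism at the core of the TN can be sketched in a few lines of NumPy; the sequence length and feature dimension below are toy values, not the paper's BFF dimensions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Transformer core op: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted mix of values

# Toy "BFF sequence": 5 time steps of 8-dimensional fingerprint features.
rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))
out = scaled_dot_product_attention(x, x, x)       # self-attention
print(out.shape)
```

Each output step is a learned, data-dependent mixture of all input steps, which is what lets the TN relate fingerprints across an entire trajectory rather than one position at a time.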
The rapid evolution of wireless communication technologies over the past few years has driven increased interest in their integration into a variety of less-explored environments, such as the underwater medium. In this magazine paper, we present a comprehensive discussion on a novel routing concept known as cross-media routing, incorporating the marine and aerial interfaces. In this regard, we discuss the limitations of single-media routing and advocate the need for cross-media routing, along with the current status of research development in this direction. To this end, we also propose a novel cross-media routing protocol known as bubble routing for autonomous marine systems, where different sets of autonomous underwater vehicles (AUVs), unmanned surface vehicles (USVs), and airborne nodes are considered for the routing problem. We evaluate the performance of the proposed routing protocol using two key performance metrics: packet delivery ratio (PDR) and end-to-end delay. Moreover, we delve into the challenges encountered in cross-media routing, unveiling exciting opportunities for future research and innovation. As wireless communication expands its horizons to encompass the underwater and aerial domains, understanding and addressing these challenges will pave the way for enhanced cross-media communication and exploration.
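As a minimal sketch, the two evaluation metrics can be computed from a packet log as follows; the `Packet` record and the log values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    sent_at: float                 # seconds
    delivered_at: Optional[float]  # None if the packet was lost

def pdr(packets):
    """Packet delivery ratio: delivered packets / sent packets."""
    delivered = sum(p.delivered_at is not None for p in packets)
    return delivered / len(packets)

def avg_end_to_end_delay(packets):
    """Mean source-to-destination delay over delivered packets only."""
    delays = [p.delivered_at - p.sent_at
              for p in packets if p.delivered_at is not None]
    return sum(delays) / len(delays)

log = [Packet(0.0, 1.2), Packet(0.5, None), Packet(1.0, 2.6), Packet(1.5, 2.9)]
print(pdr(log))  # 0.75
print(avg_end_to_end_delay(log))
```

In a cross-media setting the same definitions apply, but a packet's delay accumulates across the acoustic, surface, and aerial hops it traverses.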
Research in underwater communication is rapidly gaining attention due to its various modern applications, and physical layer security offers an efficient mechanism to secure such communication. In this paper, we propose a novel physical layer authentication (PLA) mechanism in underwater acoustic communication networks, where we exploit the position of the transmitter nodes to achieve authentication. We perform transmitter position estimation from the signals received at reference nodes deployed at fixed positions in a predefined underwater region. We use time of arrival (ToA) estimation and derive the distribution of the inherent uncertainty in the estimation. Next, we perform binary hypothesis testing on the estimated position to decide whether the transmitter node is legitimate or malicious. We then provide closed-form expressions for the false alarm rate and missed detection rate resulting from the binary hypothesis testing. We validate our proposal via simulation results, which demonstrate the behavior of the errors as functions of link quality and malicious node location, along with receiver operating characteristic (ROC) curves. We also compare our results with the performance of previously proposed fingerprint mechanisms for PLA in underwater acoustic communication networks, and show a clear advantage of using position as the fingerprint in PLA.
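The decision step can be sketched as follows, with Gaussian position-estimation error standing in for the derived ToA uncertainty distribution; the positions, noise level, and threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def authenticate(est_pos, claimed_pos, threshold):
    """H0 (legitimate) if the position estimate lies within `threshold`
    of the claimed position, H1 (malicious) otherwise."""
    return np.linalg.norm(est_pos - claimed_pos) <= threshold

claimed = np.array([50.0, 30.0])   # claimed transmitter position (m)
sigma = 2.0                        # position-estimate std dev (m), hypothetical
threshold = 5.0                    # decision threshold (m), hypothetical

# ToA-based estimates modeled as the true position plus Gaussian error.
legit_est = claimed + sigma * rng.standard_normal((100_000, 2))
eve_true = np.array([70.0, 45.0])  # malicious node's actual position
eve_est = eve_true + sigma * rng.standard_normal((100_000, 2))

# False alarm: legitimate node rejected; missed detection: eve accepted.
far = np.mean(np.linalg.norm(legit_est - claimed, axis=1) > threshold)
mdr = np.mean(np.linalg.norm(eve_est - claimed, axis=1) <= threshold)
print(f"FAR ~ {far:.4f}, MDR ~ {mdr:.4f}")
```

Sweeping `threshold` trades the false alarm rate against the missed detection rate, which is exactly what the ROC curves mentioned above capture.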
The demand for high data rates is rapidly increasing as the interest in Magnetic Induction (MI) communication-based underwater applications grows. However, the data rate in MI is limited by the low operational frequencies used to generate a quasi-static magnetic field. In this paper, we propose the use of full-duplex (FD) MI communication to efficiently utilize the available bandwidth and instantly double the data rate. We propose a two-dimensional transceiver architecture that achieves full-duplex communication by exploiting the directional nature of magnetic fields. We further evaluate the proposed end-to-end FD MI communication scheme in terms of self-interference (SI), its impact on communication distance, and its robustness to orientation sensitivity. Finally, we conclude by discussing typical challenges in the realization of FD MI communication and highlight a few potential future research directions.
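The rate argument can be illustrated with a simple Shannon-capacity comparison between half-duplex operation and FD operation with residual SI; the bandwidth, powers, and SI suppression levels below are illustrative values, not measured MI parameters.

```python
import math

def hd_rate(bw_hz, snr_lin):
    """Half-duplex: each direction only transmits half the time."""
    return 0.5 * bw_hz * math.log2(1 + snr_lin)

def fd_rate(bw_hz, sig_pow, noise_pow, si_pow):
    """Full-duplex: both directions at once; residual self-interference
    raises the effective noise floor."""
    sinr = sig_pow / (noise_pow + si_pow)
    return bw_hz * math.log2(1 + sinr)

bw = 10e3                 # 10 kHz MI bandwidth (illustrative)
sig, noise = 1e-9, 1e-12  # received signal and noise power (W)
for si_suppression_db in (40, 20):
    si = sig * 10 ** (-si_suppression_db / 10)
    gain = fd_rate(bw, sig, noise, si) / hd_rate(bw, sig / noise)
    print(si_suppression_db, gain)
```

With strong SI suppression the FD link approaches the ideal 2x rate gain; as suppression weakens, residual SI eats into that gain, which is why SI cancellation is central to the evaluation above.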
Food recognition is an important task for a variety of applications, including managing health conditions and assisting visually impaired people. Several food recognition studies have focused on generic types of food or specific cuisines; however, food recognition for Middle Eastern cuisines has remained unexplored. Therefore, in this paper we focus on developing a mobile-friendly, Middle Eastern cuisine-focused food recognition application for assisted living purposes. In order to enable a low-latency, high-accuracy food classification system, we opted to utilize the MobileNet-v2 deep learning model. As some foods are more popular than others, the number of samples per class in the used Middle Eastern food dataset is relatively imbalanced. To compensate for this problem, data augmentation methods are applied to the underrepresented classes. Experimental results show that using the MobileNet-v2 architecture for this task is beneficial in terms of both accuracy and memory usage. With the model achieving 94% accuracy on 23 food classes, the developed mobile application has the potential to serve the visually impaired in automatic food recognition via images.
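The class-balancing step can be sketched as oversampling with a simple augmentation (a horizontal flip stands in for the full augmentation pipeline); the toy images and class counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def balance_by_augmentation(images, labels):
    """Oversample underrepresented classes with augmented copies so
    every class reaches the size of the largest class."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    aug_x, aug_y = [images], [labels]
    for cls, cnt in zip(classes, counts):
        need = target - cnt
        if need == 0:
            continue
        idx = rng.choice(np.where(labels == cls)[0], size=need)
        aug_x.append(images[idx, :, ::-1])   # horizontal flip as augmentation
        aug_y.append(np.full(need, cls))
    return np.concatenate(aug_x), np.concatenate(aug_y)

x = rng.random((10, 8, 8))        # 10 toy 8x8 "images"
y = np.array([0] * 7 + [1] * 3)   # class 1 is underrepresented
xb, yb = balance_by_augmentation(x, y)
print(np.unique(yb, return_counts=True))
```

A real pipeline would combine several transforms (rotations, crops, color jitter) rather than flips alone, but the balancing logic is the same.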
In this paper, we investigate, for the first time, dynamic power allocation and the decoding order at the base station (BS) in two-user uplink (UL) cooperative non-orthogonal multiple access (C-NOMA)-based cellular networks. In doing so, we formulate a joint optimization problem aiming at maximizing the minimum achievable user rate, which is non-convex and hard to solve directly. To tackle this issue, an iterative algorithm based on successive convex approximation (SCA) is proposed. The numerical results reveal that the proposed scheme provides superior performance in comparison with traditional UL NOMA. In addition, we demonstrate that in UL C-NOMA, decoding the far NOMA user first at the BS provides the best performance.
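How the SIC decoding order shapes the two users' rates can be sketched for plain (non-cooperative) two-user UL NOMA; note that this toy omits the cooperative relaying that drives the conclusion above, and the channel gains and powers are hypothetical.

```python
import math

def min_rate(p_near, p_far, g_near, g_far, noise, decode_far_first):
    """Minimum user rate in two-user UL NOMA with SIC at the BS.

    Whichever user is decoded first sees the other user's signal as
    interference; the second user is decoded interference-free.
    """
    if decode_far_first:
        r_far = math.log2(1 + p_far * g_far / (p_near * g_near + noise))
        r_near = math.log2(1 + p_near * g_near / noise)
    else:
        r_near = math.log2(1 + p_near * g_near / (p_far * g_far + noise))
        r_far = math.log2(1 + p_far * g_far / noise)
    return min(r_near, r_far)

# Hypothetical setup: the near user has the stronger link.
p, g_near, g_far, n0 = 1.0, 1.0, 0.1, 0.01
print(min_rate(p, p, g_near, g_far, n0, decode_far_first=True))
print(min_rate(p, p, g_near, g_far, n0, decode_far_first=False))
```

In this non-cooperative toy the ordering that wins depends on the gains and powers; the paper's point is that once cooperation between the users is added, far-user-first decoding becomes the best choice.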
LoRa wireless networks are considered a key enabling technology for next-generation Internet of Things (IoT) systems. New IoT deployments (e.g., smart city scenarios) can have thousands of devices per square kilometer, leading to a huge amount of power consumption to provide connectivity. In this paper, we investigate green LoRa wireless networks powered by a hybrid of the grid and renewable energy sources, which can benefit from harvested energy while dealing with the intermittent supply. This paper proposes resource management schemes for the limited number of channels and spreading factors (SFs), with the objective of improving the LoRa gateway's energy efficiency. First, the problem of minimizing grid power consumption while satisfying the system's quality-of-service demands is formulated. Specifically, both uncorrelated and time-correlated channel scenarios are investigated. The optimal resource management problem is solved by decoupling the formulated problem into two sub-problems: a channel and SF assignment problem and an energy management problem. Since the optimal solution incurs high complexity, online resource management heuristic algorithms that minimize the grid energy consumption are proposed. Finally, taking into account the channel and energy correlation, adaptable resource management schemes based on Reinforcement Learning (RL) are developed. Simulation results show that the proposed resource management schemes offer efficient use of renewable energy in LoRa wireless networks.
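A minimal version of the assignment sub-problem's heuristic might look as follows, pairing each device with the smallest feasible SF (shortest airtime, hence least energy); the per-SF SNR thresholds are typical LoRa datasheet values, and the greedy round-robin channel rule is an illustrative simplification of the proposed schemes.

```python
# Typical LoRa demodulation SNR thresholds (dB) per spreading factor.
SNR_REQ = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def assign_sf_and_channel(devices, n_channels):
    """Greedy heuristic: give each device the smallest SF whose SNR
    threshold its link meets, and spread devices over channels
    round-robin. `devices` maps device id -> link SNR in dB."""
    plan = {}
    for i, (dev, snr_db) in enumerate(devices.items()):
        sf = next((s for s in sorted(SNR_REQ) if snr_db >= SNR_REQ[s]), None)
        if sf is None:
            continue  # link too weak even at SF12: unreachable
        plan[dev] = {"sf": sf, "channel": i % n_channels}
    return plan

devices = {"d1": -3.0, "d2": -11.0, "d3": -19.0, "d4": -25.0}
print(assign_sf_and_channel(devices, n_channels=2))
```

Smaller SFs halve the airtime per step, so pushing each device to the lowest feasible SF directly reduces transmit energy; the RL-based schemes additionally adapt these decisions to channel and harvested-energy correlation over time.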
The prevalence of Diabetes mellitus (DM) in the Middle East is exceptionally high compared to the rest of the world. In fact, the prevalence of diabetes in the Middle East is 17-20%, well above the global average of 8-9%. Research has shown that food intake has strong connections with the blood glucose levels of a patient. In this regard, there is a need to build automatic tools to monitor the blood glucose levels of diabetics and their daily food intake. This paper presents an automatic way of tracking the continuous glucose and food intake of diabetics using off-the-shelf sensors and machine learning, respectively. Our system not only helps diabetics track their daily food intake but also assists doctors in analyzing the impact of food intake on blood glucose in real time. For food recognition, we collect a large-scale Middle Eastern food dataset and propose a fusion-based framework that incorporates several existing pre-trained deep models for Middle Eastern food recognition.
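The fusion step can be sketched as a (weighted) average of the per-model class probabilities; the model outputs below are toy values, not results from the actual pre-trained models.

```python
import numpy as np

def fuse_predictions(prob_list, weights=None):
    """Late fusion: weighted average of per-model class probabilities."""
    probs = np.stack(prob_list)                  # (n_models, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                              # normalize model weights
    return (w[:, None] * probs).sum(axis=0)

# Toy softmax outputs of three pre-trained models over 4 food classes.
p1 = np.array([0.70, 0.10, 0.10, 0.10])
p2 = np.array([0.40, 0.30, 0.20, 0.10])
p3 = np.array([0.60, 0.20, 0.10, 0.10])
fused = fuse_predictions([p1, p2, p3])
print(fused, fused.argmax())
```

Averaging the probability vectors (rather than majority-voting the labels) lets confident models outweigh uncertain ones, and per-model weights could be tuned on a validation set.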