Abstract: In the evolving landscape of the Internet of Things (IoT), integrating cognitive radio (CR) has become a practical solution to the challenge of spectrum scarcity, leading to the development of the cognitive IoT (CIoT). However, the vulnerability of radio communications makes radio jamming attacks a key concern in CIoT networks. In this paper, we introduce a novel deep reinforcement learning (DRL) approach designed to optimize throughput and extend the network lifetime of an energy-constrained CIoT system under jamming attacks. This DRL framework equips a CIoT device with the autonomy to manage energy harvesting (EH) and data transmission while regulating its transmit power to respect spectrum-sharing constraints. We formulate the optimization problem under various constraints and model the CIoT device's interactions with the channel as a model-free Markov decision process (MDP). The MDP serves as the foundation for a double deep Q-network (DDQN) that helps the CIoT agent learn the optimal communication policy despite dynamic channel occupancy, jamming attacks, and channel fading. Additionally, we introduce a variant of the upper confidence bound (UCB) algorithm, named UCB-IA, which enhances the CIoT network's ability to efficiently navigate jamming attacks within the channel. The proposed DRL algorithm does not rely on prior knowledge and uses locally observable information, such as channel occupancy, jamming activity, channel gain, and energy arrival, to make decisions. Extensive simulations demonstrate that the proposed DRL algorithm with the UCB-IA strategy surpasses existing benchmarks, enabling more adaptive, energy-efficient, and secure spectrum sharing in CIoT networks.
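For concreteness, the minimal Python sketch below illustrates the two ingredients named above: the double-DQN target (the online network selects the next action, the target network evaluates it) and a UCB-style exploration bonus over action-visit counts. The placeholder networks, the action set, and the bonus weight are illustrative assumptions, not the UCB-IA design from the paper.

```python
# Minimal sketch of a double DQN target update with a UCB-style exploration
# bonus; network shapes, action names, and the bonus weight c_ucb are
# illustrative assumptions, not the paper's exact design.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4          # e.g., {harvest, transmit at one of 3 power levels}
gamma = 0.99           # discount factor
c_ucb = 2.0            # exploration weight (assumed)

def q_online(state):   # placeholder online network
    return rng.normal(size=n_actions)

def q_target(state):   # placeholder target network
    return rng.normal(size=n_actions)

def ucb_action(state, counts, t):
    """Pick the action maximizing Q plus a UCB bonus over visit counts."""
    bonus = c_ucb * np.sqrt(np.log(t + 1) / (counts + 1e-6))
    return int(np.argmax(q_online(state) + bonus))

def ddqn_target(reward, next_state, done):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    a_star = int(np.argmax(q_online(next_state)))
    return reward + (0.0 if done else gamma * q_target(next_state)[a_star])

# toy usage
counts = np.zeros(n_actions)
a = ucb_action(state=np.zeros(3), counts=counts, t=1)
counts[a] += 1
y = ddqn_target(reward=0.5, next_state=np.zeros(3), done=False)
print(a, y)
```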
Abstract: This letter presents a novel deep reinforcement learning (DRL) approach for joint time allocation and power control in a cognitive Internet of Things (CIoT) system with simultaneous wireless information and power transfer (SWIPT). The CIoT transmitter autonomously manages energy harvesting (EH) and transmissions using a learnable time switching factor while optimizing its transmit power to enhance throughput and network lifetime. The joint optimization is modeled as a Markov decision process under small-scale fading, a realistic EH model, and interference constraints. We develop a double deep Q-network (DDQN) enhanced with an upper confidence bound (UCB) exploration strategy. Simulations benchmark our approach and show superior performance over existing DRL methods.
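As a rough illustration of the time-switching trade-off mentioned above, the sketch below splits each slot of duration T into an EH phase of length alpha*T and a transmission phase of length (1-alpha)*T, using a simple linear EH model. All parameter values (conversion efficiency, received power, bandwidth, SNR) are assumptions for illustration, not the letter's system parameters.

```python
# Illustrative time-switching SWIPT slot model: a fraction alpha of the slot
# is used for energy harvesting and the remainder for transmission. The
# linear EH model and all numeric values are assumed for illustration only.
import numpy as np

def ts_slot(alpha, p_rx=1e-3, eta=0.6, T=1.0, bw=1e6, snr=10.0):
    """Return (harvested energy [J], delivered bits) for one slot."""
    e_harvested = eta * p_rx * alpha * T            # linear EH during alpha*T
    bits = (1 - alpha) * T * bw * np.log2(1 + snr)  # transmit in the remainder
    return e_harvested, bits

for alpha in (0.2, 0.5, 0.8):
    print(alpha, ts_slot(alpha))
```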
Abstract: This paper proposes a hierarchical deep reinforcement learning (DRL) framework based on the soft actor-critic (SAC) algorithm for hybrid underlay-overlay cognitive Internet of Things (CIoT) networks with simultaneous wireless information and power transfer (SWIPT)-based energy harvesting (EH) and cooperative caching. Unlike prior hierarchical DRL approaches that focus primarily on spectrum access or power control, our work jointly optimizes EH, hybrid access coordination, power allocation, and caching in a unified framework. The joint optimization problem is formulated as a weighted-sum multi-objective task designed to maximize throughput and cache hit ratio while minimizing transmission delay. In the proposed model, CIoT agents jointly optimize EH and data transmission using a learnable time switching (TS) factor. They also coordinate spectrum access under hybrid overlay-underlay paradigms and make power control and cache placement decisions while considering energy, interference, and storage constraints. Specifically, cooperative caching enables overlay access, while power control is used for underlay access. A novel three-level hierarchical SAC (H-SAC) agent decomposes the mixed discrete-continuous action space into modular subproblems, improving scalability and convergence over flat DRL methods. The high-level policy adjusts the TS factor, the mid-level policy manages spectrum access coordination and cache sharing, and the low-level policy decides the transmit power and the caching actions for both the CIoT agent's content and the primary user's (PU's) content. Simulation results show that the proposed H-SAC approach significantly outperforms benchmark and greedy strategies, achieving better average sum rate, delay, cache hit ratio, and energy efficiency, even under channel fading and uncertain conditions.
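The three-level decomposition described above can be pictured as three nested policies whose outputs are composed into one action per slot. The stub below is a minimal sketch of that structure only; the random choices stand in for the trained SAC policies, and all names and value ranges are assumptions.

```python
# Minimal sketch of a three-level hierarchical action decomposition; the
# stubbed policies and value ranges are illustrative assumptions, not the
# trained H-SAC policies from the paper.
import random

def high_level_policy(obs):
    """Choose the time-switching factor alpha in (0, 1)."""
    return random.uniform(0.1, 0.9)

def mid_level_policy(obs, alpha):
    """Coordinate spectrum access (overlay vs. underlay) and cache sharing."""
    return {"access_mode": random.choice(["overlay", "underlay"]),
            "share_cache": random.random() < 0.5}

def low_level_policy(obs, alpha, coord):
    """Pick transmit power and cache-placement actions under the coordination."""
    p_max = 0.1 if coord["access_mode"] == "underlay" else 1.0  # interference cap
    return {"tx_power": random.uniform(0.0, p_max),
            "cache_pu_content": coord["share_cache"]}

obs = {}
alpha = high_level_policy(obs)
coord = mid_level_policy(obs, alpha)
action = low_level_policy(obs, alpha, coord)
print(alpha, coord, action)
```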
Abstract: In cognitive Internet of Things (CIoT) networks, efficient spectrum sharing is essential to meet increasing wireless demands. This paper presents a novel deep reinforcement learning (DRL)-based approach for joint cooperative caching and spectrum access coordination in CIoT networks, enabling CIoT agents to collaborate with primary users (PUs) by caching PU content and serving their requests, fostering mutual benefits. The proposed DRL framework jointly optimizes the caching policy and spectrum access under challenging conditions. Unlike traditional cognitive radio (CR) methods, in which CIoT agents vacate the spectrum for PUs, or relaying techniques, which merely support spectrum sharing, caching brings data closer to the edge, reducing latency by minimizing the retrieval distance. Simulations demonstrate that our approach outperforms others in lowering latency, increasing CIoT and PU cache hit rates, and enhancing network throughput. This approach redefines spectrum sharing, offering a fresh perspective on CIoT network design and illustrating how DRL-guided caching makes collaboration beneficial in dynamic spectrum access scenarios, elevating CIoT performance under constrained resources.
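To make the cache-hit-ratio metric concrete, the toy loop below has a CIoT node keep a small edge cache of PU content and serve matching requests locally; hits avoid the longer retrieval path that drives latency. The catalog size, request model, and recency-based placement are illustrative assumptions, not the learned caching policy.

```python
# Toy illustration of the cooperative-caching idea: the CIoT node caches a
# few PU items and serves matching requests locally. Catalog size, cache
# policy, and the request trace are assumptions for illustration.
from collections import deque
import random

cache = deque(maxlen=5)            # small edge cache (most-recent items)
hits = requests = 0
for _ in range(1000):
    item = f"pu_content_{random.randint(0, 19)}"   # assumed 20-item PU catalog
    requests += 1
    if item in cache:
        hits += 1                   # served at the edge: no backhaul retrieval
    else:
        cache.append(item)          # cache on miss (simple recency policy)
print("cache hit ratio:", hits / requests)
```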
Abstract: Cognitive radio networks (CRNs) are acknowledged for their ability to tackle the issue of spectrum under-utilization. In the realm of CRNs, this paper investigates energy efficiency and addresses the critical challenge of optimizing system reliability for the overlay CRN access mode. We consider randomly dispersed secondary users (SUs) serving as relays for primary users (PUs), one of which is designated to harvest energy through the time switching energy harvesting (EH) protocol. This relay amplifies and forwards (AF) the PU's messages and broadcasts them, along with its own, across cascaded $\kappa$-$\mu$ fading channels. The power splitting protocol is another EH approach, utilized by the SU and PU receivers to increase the energy in their storage devices. In addition, the SU transmitters and the SU receiver are equipped with multiple antennas for reception and apply maximal ratio combining. The outage probability is used to assess the reliability of both the primary and secondary networks. An energy efficiency evaluation is then performed to determine the impact of EH on the system. Finally, an optimization problem is formulated to maximize the data rate of the SUs by optimizing the time switching and power allocation parameters of the SU relay.
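For reference, the standard linear power-splitting EH relations that this kind of analysis typically builds on are sketched below; the symbols ($\rho$ for the splitting ratio, $\eta$ for the conversion efficiency, $P_r$ for the received power, $T$ for the block duration) are assumptions for illustration and are not taken from the paper's notation.

```latex
% Assumed linear power-splitting EH model: a fraction \rho of the received
% power is harvested, the rest is used for information decoding, and the
% energy efficiency is the rate normalized by the total consumed power.
E_{\mathrm{PS}} = \eta\,\rho\,P_r\,T, \qquad
R = \log_2\!\left(1 + \frac{(1-\rho)\,P_r}{\sigma^2}\right), \qquad
\mathrm{EE} = \frac{R}{P_{\mathrm{total}}}
```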
Abstract: This paper presents a reinforcement learning (RL)-based approach to improve the physical layer security (PLS) of an underlay cognitive radio network (CRN) over cascaded channels. Such channels arise in highly mobile networks such as cognitive vehicular networks (CVNs). An eavesdropper aims to intercept the communications between secondary users (SUs). The SU receiver has full-duplex and energy harvesting capabilities, allowing it to generate jamming signals that confound the eavesdropper and enhance security. Moreover, the SU transmitter extracts energy from ambient radio frequency signals to power subsequent transmissions to its intended receiver. To optimize the privacy and reliability of the SUs in a CVN, a deep Q-network (DQN) strategy is employed, with a DQN agent assigned to each SU transmitter. The objective of the SUs is to determine the optimal transmission power and decide whether to collect energy or transmit messages during each time period in order to maximize their secrecy rate. Thereafter, we propose a DQN approach to maximize the throughput of the SUs while respecting the interference threshold acceptable at the primary user's receiver. Our findings show that the proposed strategy outperforms two baseline strategies in terms of security and reliability.
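The per-slot decision and secrecy-rate reward described above can be sketched as follows; the discrete action set, the epsilon-greedy selection rule, and all channel and noise values are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of a per-slot harvest-or-transmit decision with a secrecy-rate
# reward; the action set, epsilon-greedy rule, and numeric values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
actions = ["harvest", "tx_0.1W", "tx_0.5W", "tx_1.0W"]

def secrecy_rate(p_tx, g_legit, g_eve, noise=1e-9):
    """Secrecy rate: legitimate-link rate minus eavesdropper-link rate (>= 0)."""
    r_legit = np.log2(1 + p_tx * g_legit / noise)
    r_eve = np.log2(1 + p_tx * g_eve / noise)
    return max(0.0, r_legit - r_eve)

def act(q_values, eps=0.1):
    """Epsilon-greedy action selection over the discrete action set."""
    if rng.random() < eps:
        return int(rng.integers(len(actions)))
    return int(np.argmax(q_values))

a = act(q_values=rng.normal(size=len(actions)))
reward = 0.0 if actions[a] == "harvest" else secrecy_rate(
    p_tx=float(actions[a].split("_")[1][:-1]), g_legit=1e-7, g_eve=2e-8)
print(actions[a], reward)
```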
Abstract: This paper explores the application of a federated learning-based multi-agent reinforcement learning (MARL) strategy to enhance physical-layer security (PLS) in a multi-cell network in the context of beyond-5G networks. In each cell, a base station (BS) operates as a deep reinforcement learning (DRL) agent that interacts with its environment to maximize the secrecy rate of legitimate users in the presence of an eavesdropper that attempts to intercept the confidential information shared between the BS and its authorized users. The DRL agents are federated in the sense that they share only their network parameters with a central server, never the private data of their legitimate users. Two DRL approaches, deep Q-network (DQN) and REINFORCE deep policy gradient (RDPG), are explored and compared. The results demonstrate that RDPG converges more rapidly than DQN. In addition, we show that the proposed method outperforms a distributed DRL approach. Furthermore, the results illustrate the trade-off between security and complexity.
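The federated aspect, agents sharing only parameters with a central server that averages them, can be sketched with a plain federated-averaging loop; the flat parameter vectors, the stand-in gradients, and the equal weighting are assumptions for illustration.

```python
# Minimal federated-averaging sketch: each BS agent shares only its network
# parameters (here flat numpy vectors), never user data, and the server
# returns their element-wise average. Weighting and shapes are assumed.
import numpy as np

def local_update(global_params, local_grad, lr=0.01):
    """One local training step at a BS agent (the gradient is a stand-in)."""
    return global_params - lr * local_grad

def fed_avg(param_list):
    """Server aggregation: average the received parameter vectors."""
    return np.mean(np.stack(param_list), axis=0)

rng = np.random.default_rng(2)
global_params = np.zeros(8)
for _ in range(3):                         # three federated rounds
    updates = [local_update(global_params, rng.normal(size=8)) for _ in range(4)]
    global_params = fed_avg(updates)       # only parameters leave the cells
print(global_params)
```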
Abstract: In this paper, a reinforcement learning technique is employed to maximize the performance of a cognitive radio network (CRN). In the presence of primary users (PUs), two secondary users (SUs) are assumed to access the licensed band in underlay mode. The SU transmitter is an energy-constrained device that must harvest energy in order to transmit signals to its intended destination. We therefore consider two main energy sources: the PUs' transmissions and ambient radio frequency (RF) sources. The SU selects whether to gather energy from the PUs or only from ambient sources based on a predetermined threshold, and energy harvesting from the PUs' messages is accomplished via the time switching approach. In addition, based on a deep Q-network (DQN) approach, the SU transmitter determines whether to collect energy or transmit messages during each time slot and selects a suitable transmission power in order to maximize its average data rate. Our findings show that the approach converges and outperforms a baseline strategy.
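The threshold rule for choosing the energy source can be illustrated as below; the threshold, time switching factor, conversion efficiency, and power levels are assumed values, not the paper's settings.

```python
# Toy illustration of the threshold rule: the SU harvests from the PU
# transmissions only when their received power exceeds a predefined
# threshold, otherwise it relies on ambient RF sources alone. All values
# are assumptions for illustration.
def harvested_energy(p_pu_rx, p_ambient, alpha=0.3, eta=0.6, T=1.0,
                     threshold=1e-6):
    """Energy harvested in one slot of duration T with TS factor alpha."""
    if p_pu_rx >= threshold:                 # strong PU interference: use it
        source_power = p_pu_rx + p_ambient
    else:                                    # weak PU signal: ambient RF only
        source_power = p_ambient
    return eta * alpha * T * source_power

print(harvested_energy(p_pu_rx=5e-6, p_ambient=1e-7))
print(harvested_energy(p_pu_rx=1e-8, p_ambient=1e-7))
```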




Abstract: In this paper, we propose a non-orthogonal multiple access (NOMA)-based communication framework that allows machine-type devices (MTDs) to access the network while avoiding congestion. The proposed technique is a two-step mechanism that first employs fast uplink grant to schedule the devices without sending a request to the base station (BS), and then applies NOMA pairing in a distributed manner to reduce signaling overhead. Because the BS has limited capability to gather information in massive scenarios, learning techniques are well suited to such problems. Therefore, multi-armed bandit learning is adopted to schedule the fast-grant MTDs. Constrained random NOMA pairing is then proposed, which helps decouple the two main challenges of fast uplink grant schemes, namely active set prediction and optimal scheduling. Using NOMA, we are able to significantly reduce the resource wastage due to prediction errors. Additionally, the results show that the proposed scheme can easily attain the impractical optimal orthogonal multiple access (OMA) performance, in terms of achievable rewards, at an affordable complexity.
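A minimal sketch of the bandit-based scheduling idea follows, assuming each MTD is treated as an arm and a UCB index decides which device receives the fast uplink grant; the Bernoulli activity model and constants are illustrative assumptions, not the paper's scheduler.

```python
# Sketch of multi-armed-bandit scheduling for fast uplink grant: each MTD is
# an arm, and the UCB index picks which device to grant. The reward model
# (grant is useful only if the device is active) is an assumption.
import numpy as np

rng = np.random.default_rng(3)
n_mtds = 10
counts = np.zeros(n_mtds)
means = np.zeros(n_mtds)
p_active = rng.uniform(0.1, 0.9, size=n_mtds)     # unknown activity probabilities

for t in range(1, 501):
    ucb = means + np.sqrt(2 * np.log(t) / (counts + 1e-6))
    mtd = int(np.argmax(ucb))                     # grant the highest-index device
    reward = float(rng.random() < p_active[mtd])  # 1 if the grant was useful
    counts[mtd] += 1
    means[mtd] += (reward - means[mtd]) / counts[mtd]

print("grants per MTD:", counts.astype(int))
```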