Abstract: In the evolving landscape of the Internet of Things (IoT), integrating cognitive radio (CR) has become a practical solution to the challenge of spectrum scarcity, leading to the development of the cognitive IoT (CIoT). However, the inherent vulnerability of radio communications makes jamming attacks a key concern in CIoT networks. In this paper, we introduce a novel deep reinforcement learning (DRL) approach designed to optimize throughput and extend the network lifetime of an energy-constrained CIoT system under jamming attacks. The DRL framework gives a CIoT device the autonomy to manage energy harvesting (EH) and data transmission, while regulating its transmit power to respect spectrum-sharing constraints. We formulate the optimization problem under various constraints and model the CIoT device's interaction with the channel as a Markov decision process (MDP), which is solved in a model-free manner. On this foundation, we develop a double deep Q-network (DDQN) that enables the CIoT agent to learn an optimal communication policy despite dynamic channel occupancy, jamming attacks, and channel fading. Additionally, we introduce a variant of the upper confidence bound (UCB) algorithm, named UCB-IA, which improves the CIoT agent's ability to operate efficiently in a jammed channel. The proposed DRL algorithm does not rely on prior knowledge and uses only locally observable information, such as channel occupancy, jamming activity, channel gain, and energy arrival, to make decisions. Extensive simulations show that the proposed DRL algorithm with the UCB-IA strategy surpasses existing benchmarks, enabling more adaptive, energy-efficient, and secure spectrum sharing in CIoT networks.
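To make the learning mechanism concrete, the following minimal Python sketch illustrates the two ingredients named above: a double DQN target, in which the online network selects the next action and the target network evaluates it, and a UCB-style exploration bonus added to the Q-values at action-selection time. All names, network sizes, and hyperparameters here are illustrative assumptions; this is a sketch of the general technique, not the paper's UCB-IA implementation.

# Minimal sketch of a DDQN update with a UCB-style exploration bonus.
# Names and hyperparameters are assumptions, not the paper's UCB-IA settings.
import numpy as np
import torch
import torch.nn as nn

state_dim, n_actions, gamma, ucb_c = 4, 3, 0.99, 2.0

def make_net():
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

online_net, target_net = make_net(), make_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)
action_counts = np.ones(n_actions)   # visit counts feeding the UCB bonus
t = 1

def select_action(state):
    # Greedy action on Q-values plus a UCB exploration bonus.
    global t
    with torch.no_grad():
        q = online_net(torch.as_tensor(state, dtype=torch.float32)).numpy()
    bonus = ucb_c * np.sqrt(np.log(t) / action_counts)
    a = int(np.argmax(q + bonus))
    action_counts[a] += 1
    t += 1
    return a

def ddqn_update(s, a, r, s_next, done):
    # One DDQN step: the online net selects the next action (argmax),
    # the target net evaluates it; done is 0 or 1.
    s = torch.as_tensor(s, dtype=torch.float32)
    s_next = torch.as_tensor(s_next, dtype=torch.float32)
    q_sa = online_net(s)[a]
    with torch.no_grad():
        a_star = online_net(s_next).argmax()
        target = r + (1 - done) * gamma * target_net(s_next)[a_star]
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice the target network is refreshed from the online network only periodically, which is what keeps the DDQN target stable.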
Abstract: This letter presents a novel deep reinforcement learning (DRL) approach for joint time allocation and power control in a cognitive Internet of Things (CIoT) system with simultaneous wireless information and power transfer (SWIPT). The CIoT transmitter autonomously manages energy harvesting (EH) and data transmission through a learnable time-switching factor while optimizing its transmit power to enhance throughput and network lifetime. The joint optimization is modeled as a Markov decision process under small-scale fading, a realistic EH model, and interference constraints. We develop a double deep Q-network (DDQN) enhanced with an upper confidence bound (UCB) exploration strategy. Simulations benchmark our approach against existing DRL methods and show superior performance.
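As a concrete illustration of the time-switching structure (written with assumed, generic notation rather than the letter's exact system model): a time-switching factor $\alpha \in [0,1]$ splits each slot of duration $T$ into an EH phase of length $\alpha T$ and a transmission phase of length $(1-\alpha)T$, so the harvested energy and achievable throughput per slot take a form such as
\[
E_h = \eta\,\alpha T\, P_s\, g, \qquad
R = (1-\alpha)\, T\, B \log_2\!\left(1 + \frac{P_t\, h}{N_0 B + I}\right),
\]
where $\eta$ is the EH conversion efficiency, $P_s$ the power of the RF source, $g$ and $h$ the channel gains on the EH and transmission links, $P_t$ the transmit power (kept below the interference constraint at the primary receiver), $B$ the bandwidth, $N_0$ the noise power spectral density, and $I$ the received interference. The learnable $\alpha$ therefore trades harvested energy against transmission time within every slot.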
Abstract: This paper proposes a hierarchical deep reinforcement learning (DRL) framework based on the soft actor-critic (SAC) algorithm for hybrid underlay-overlay cognitive Internet of Things (CIoT) networks with simultaneous wireless information and power transfer (SWIPT)-based energy harvesting (EH) and cooperative caching. Unlike prior hierarchical DRL approaches that focus primarily on spectrum access or power control, our work jointly optimizes EH, hybrid access coordination, power allocation, and caching in a unified framework. The joint optimization problem is formulated as a weighted-sum multi-objective task that maximizes throughput and cache hit ratio while minimizing transmission delay. In the proposed model, CIoT agents jointly optimize EH and data transmission using a learnable time-switching (TS) factor; they also coordinate spectrum access under the hybrid overlay-underlay paradigm and make power control and cache placement decisions subject to energy, interference, and storage constraints. Specifically, cooperative caching enables overlay access, while power control governs underlay access. A novel three-level hierarchical SAC (H-SAC) agent decomposes the mixed discrete-continuous action space into modular subproblems, improving scalability and convergence over flat DRL methods. The high-level policy adjusts the TS factor, the mid-level policy manages spectrum access coordination and cache sharing, and the low-level policy selects the transmit power and the caching actions for both CIoT and primary user (PU) content. Simulation results show that the proposed H-SAC approach significantly outperforms benchmark and greedy strategies, achieving better average sum rate, delay, cache hit ratio, and energy efficiency, even under channel fading and uncertain conditions.
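The following minimal sketch (illustrative names and weights, not the paper's H-SAC implementation) shows how the mixed discrete-continuous action described above decomposes across the three policy levels, and how a weighted-sum multi-objective reward can combine throughput, cache hit ratio, and delay:

# Illustrative decomposition of the hierarchical action and a weighted-sum reward.
# Field names, modes, and weights are assumptions made for the sketch.
from dataclasses import dataclass

@dataclass
class HierarchicalAction:
    ts_factor: float        # high-level policy: time-switching split in [0, 1]
    access_mode: str        # mid-level policy: "overlay" (via cached PU content) or "underlay"
    tx_power: float         # low-level policy: transmit power for underlay access
    cache_decision: bool    # low-level policy: whether to cache the requested content

def weighted_sum_reward(throughput, cache_hit_ratio, delay, w1=1.0, w2=0.5, w3=0.3):
    # Weighted-sum multi-objective reward: reward throughput and cache hits,
    # penalize transmission delay (weights are illustrative only).
    return w1 * throughput + w2 * cache_hit_ratio - w3 * delay

Decomposing the action this way lets each SAC level work on a smaller, more homogeneous subproblem, which is the scalability and convergence benefit claimed over a flat DRL agent acting on the full mixed space.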
Abstract: In cognitive Internet of Things (CIoT) networks, efficient spectrum sharing is essential to meet growing wireless demands. This paper presents a novel deep reinforcement learning (DRL)-based approach for joint cooperative caching and spectrum access coordination in CIoT networks, enabling CIoT agents to collaborate with primary users (PUs) by caching PU content and serving their requests, to mutual benefit. The proposed DRL framework jointly optimizes the caching policy and spectrum access under challenging channel conditions. Unlike traditional cognitive radio (CR) methods, in which CIoT agents vacate the spectrum for PUs, or relaying techniques, which merely support spectrum sharing, caching brings data closer to the edge and reduces latency by minimizing the retrieval distance. Simulations demonstrate that our approach outperforms existing schemes in lowering latency, increasing CIoT and PU cache hit rates, and enhancing network throughput. This approach offers a fresh perspective on CIoT network design, illustrating how DRL-guided caching can highlight the benefits of collaboration over conventional dynamic spectrum access and elevate CIoT performance under constrained resources.
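A small, hypothetical sketch of the caching intuition stated above: a PU request served from the CIoT agent's cache avoids backhaul retrieval, which is the source of the latency reduction that makes the collaboration attractive to both sides. The function name, delay values, and content identifiers are assumptions for illustration only, not values from the paper.

# Illustrative latency accounting for a PU request at the CIoT edge cache.
def serve_pu_request(content_id, ciot_cache, edge_delay=0.005, backhaul_delay=0.05):
    # Cache hit: the CIoT agent delivers the PU's content locally.
    # Cache miss: the content must be fetched over the backhaul.
    if content_id in ciot_cache:
        return edge_delay, True
    return backhaul_delay, False

ciot_cache = {"pu_item_3", "pu_item_7"}   # contents placed by the learned caching policy
delay, hit = serve_pu_request("pu_item_3", ciot_cache)

The DRL agent's caching decisions determine which PU contents populate this cache, so a higher PU cache hit rate translates directly into the lower latency and the spectrum-access goodwill described above.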