Jeffrey H. Reed

Keep It Simple: CNN Model Complexity Studies for Interference Classification Tasks

Mar 06, 2023
Taiwo Oyedare, Vijay K. Shah, Daniel J. Jakubisin, Jeffrey H. Reed

The growing number of devices using the wireless spectrum makes it important to find ways to minimize interference and optimize the use of the spectrum. Deep learning models, such as convolutional neural networks (CNNs), have been widely utilized to identify, classify, or mitigate interference due to their ability to learn from the data directly. However, there has been limited research on the complexity of such deep learning models. The major focus of deep learning-based wireless classification literature has been on improving classification accuracy, often at the expense of model complexity. This may not be practical for many wireless devices, such as internet of things (IoT) devices, which usually have very limited computational resources and cannot handle very complex models. Thus, it becomes important to account for model complexity when designing deep learning-based models for interference classification. To address this, we conduct an analysis of CNN-based wireless classification that explores the trade-off among dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty: namely, interference classification, heterogeneous transmitter classification, and homogeneous transmitter classification. Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model, providing important insights into the use of CNNs in computationally constrained applications.
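To make the parameter-count comparison concrete, here is a minimal PyTorch sketch contrasting a small and a large CNN classifier; the layer sizes are illustrative assumptions, not the architectures studied in the paper.

```python
# Illustrative only: hypothetical small vs. large CNN classifiers for
# spectrogram-style inputs; layer sizes are assumptions, not the paper's models.
import torch.nn as nn

def make_cnn(channels, n_classes=3):
    """Stack Conv-ReLU-Pool blocks, then a linear classifier head."""
    layers, in_ch = [], 1
    for out_ch in channels:
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_classes)]
    return nn.Sequential(*layers)

small = make_cnn([8, 16])          # few parameters
large = make_cnn([64, 128, 256])   # many more parameters

for name, model in [("small", small), ("large", large)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params:,} trainable parameters")
```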

* 6 pages, 7 figures, 3 tables 

Line-of-Sight Probability for Outdoor-to-Indoor UAV-Assisted Emergency Networks

Feb 27, 2023
Gaurav Duggal, R. Michael Buehrer, Nishith Tripathi, Jeffrey H. Reed

For emergency response scenarios, such as firefighting in urban environments, there is a need both to localize emergency responders inside the building and to support a high-bandwidth communication link between the responders and a command-and-control center. The emergency networks for such scenarios can be established through the quick deployment of Unmanned Aerial Vehicles (UAVs). Further, the 3D mobility of UAVs can be leveraged to improve the quality of the wireless link by maneuvering them into advantageous locations. This has motivated recent propagation measurement campaigns to study low-altitude air-to-ground channels in both the 5G sub-6 GHz and 5G mmWave bands. In this paper, we develop a model for the link in a UAV-assisted emergency localization and/or communication system. Specifically, given the importance of Line-of-Sight (LoS) links for localization as well as mmWave communication, we derive a closed-form expression for the LoS probability. This probability is parameterized by the UAV base station location, the size of the building, and the size of the window that offers the best propagation path. An expression for the coverage probability is also derived. The LoS and coverage probabilities derived in this paper can be used to analyze the outdoor-UAV-to-indoor propagation environment to determine optimal UAV positioning and the number of UAVs needed to achieve the desired performance of the emergency network.
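The closed-form expression itself is not reproduced in this abstract; as a rough stand-in, the sketch below estimates an outdoor-to-indoor LoS probability by Monte-Carlo tests of rays passing through a single window. The building, window, and UAV geometry are all assumed for illustration and are not the paper's model.

```python
# Illustrative Monte-Carlo estimate of outdoor-to-indoor LoS probability
# through one window; geometry values are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

building = np.array([20.0, 30.0, 15.0])      # (depth x, width y, height z); facade at x = 0
window = dict(y=(12.0, 18.0), z=(4.0, 7.0))  # window extent on the x = 0 facade
uav = np.array([-40.0, 15.0, 30.0])          # UAV position outside (x < 0)

def has_los(responder):
    """Segment UAV->responder is LoS iff it crosses the facade inside the window."""
    t = (0.0 - uav[0]) / (responder[0] - uav[0])   # parameter where segment hits x = 0
    hit = uav + t * (responder - uav)
    return (window["y"][0] <= hit[1] <= window["y"][1]
            and window["z"][0] <= hit[2] <= window["z"][1])

# Responders uniformly distributed inside the building volume
responders = rng.uniform([0, 0, 0], building, size=(100_000, 3))
p_los = np.mean([has_los(r) for r in responders])
print(f"Estimated LoS probability: {p_los:.4f}")
```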

* Accepted to be published in IEEE ICC 2023, Rome, Italy 

Probability-Reduction of Geolocation using Reconfigurable Intelligent Surface Reflections

Oct 18, 2022
Anders M. Buvarp, Daniel J. Jakubisin, William C. Headley, Jeffrey H. Reed

With the recent introduction of electromagnetic meta-surfaces and reconfigurable intelligent surfaces, a paradigm shift is currently taking place in the world of wireless communications and related industries. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. This is of great interest as we transition from 5G mobile communications towards 6G. In this paper, we explore the possibility of using a reconfigurable intelligent surface in order to disrupt the ability of an unintended receiver to geolocate the source of transmitted signals in a 5G communication system. We investigate how the performance of the MUSIC algorithm at the unintended receiver is degraded by correlated reflected signals introduced by a reconfigurable intelligent surface in the wireless channel. We analyze the impact of the direction of arrival, delay, correlation, and strength of the reconfigurable intelligent surface signal with respect to the line-of-sight path from the transmitter to the unintended receiver. An effective method is introduced for defeating direction-finding efforts using dual sets of surface reflections. This novel method is called Geolocation-Probability Reduction using Dual Reconfigurable Intelligent Surfaces (GPRIS). We also show that the efficiency of this method is highly dependent on the geometry, that is, the placement of the reconfigurable intelligent surface relative to the unintended receiver and the transmitter.
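To illustrate the kind of degradation studied here, the following minimal MUSIC sketch adds a fully correlated (coherent) reflected path to the direct path, which can blur or bias the direction-of-arrival spectrum; the array size, angles, and signal model are assumptions, not the paper's setup.

```python
# Minimal MUSIC sketch (numpy): a coherent reflected path degrades the
# direction-of-arrival spectrum; all signal parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 500                  # array elements, snapshots
d = 0.5                        # element spacing in wavelengths

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Direct path at 20 deg plus a fully correlated reflection at -35 deg
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = (np.outer(steering(20.0), s)
     + 0.8 * np.outer(steering(-35.0), s)          # same waveform: coherent
     + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))

R = X @ X.conj().T / N                              # sample covariance
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-2]                                 # noise subspace, two assumed sources

angles = np.arange(-90, 90, 0.5)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
print("MUSIC peak at", angles[int(np.argmax(spectrum))], "deg")
```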

* 6 pages, 13 figures, 1 table, submitted to 2023 IEEE Wireless Communications and Networking Conference 

Learning based Age of Information Minimization in UAV-relayed IoT Networks

Mar 08, 2022
Biplav Choudhury, Prasenjit Karmakar, Vijay K. Shah, Jeffrey H. Reed

Unmanned Aerial Vehicles (UAVs) are used as aerial base stations to relay time-sensitive packets from IoT devices to the nearby terrestrial base station (TBS). Scheduling packets in such UAV-relayed IoT networks to ensure fresh (or up-to-date) IoT device packets at the TBS is a challenging problem, as it involves two simultaneous steps: (i) sampling of packets generated at IoT devices by the UAVs [hop 1] and (ii) updating of sampled packets from UAVs to the TBS [hop 2]. To address this, we propose Age-of-Information (AoI) scheduling algorithms for two-hop UAV-relayed IoT networks. First, we propose a low-complexity AoI scheduler, termed MAF-MAD, that employs the Maximum AoI First (MAF) policy for sampling of IoT devices at the UAV (hop 1) and the Maximum AoI Difference (MAD) policy for updating sampled packets from the UAV to the TBS (hop 2). We prove that MAF-MAD is the optimal AoI scheduler under ideal conditions (lossless wireless channels and generate-at-will traffic generation at IoT devices). For general conditions (lossy channel conditions and varying periodic traffic generation at IoT devices), we instead propose a deep reinforcement learning scheduler based on Proximal Policy Optimization (PPO). Simulation results show that the proposed PPO-based scheduler outperforms other schedulers, such as MAF-MAD, MAF, and round-robin, in all considered general scenarios.
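As a rough illustration of the two policies named above, this sketch selects devices per MAF at hop 1 and per MAD at hop 2; the reading of MAD as the TBS-UAV AoI gap, and the toy numbers, are assumptions.

```python
# Hedged sketch of the two policies named in the abstract: MAF picks the
# device with the largest AoI at the UAV (hop 1), MAD picks the device whose
# UAV copy would most reduce AoI at the TBS (hop 2). Data structures assumed.
def maf_sample(aoi_at_uav):
    """Hop 1: sample the device whose AoI at the UAV is largest."""
    return max(aoi_at_uav, key=aoi_at_uav.get)

def mad_update(aoi_at_tbs, aoi_at_uav):
    """Hop 2: forward the device packet with the largest AoI difference
    between the TBS and the UAV (largest freshness gain on delivery)."""
    return max(aoi_at_tbs, key=lambda dev: aoi_at_tbs[dev] - aoi_at_uav[dev])

# Toy state: per-device AoI values (arbitrary example numbers)
aoi_uav = {"dev1": 3, "dev2": 7, "dev3": 1}
aoi_tbs = {"dev1": 9, "dev2": 8, "dev3": 12}
print(maf_sample(aoi_uav))            # dev2: stalest at the UAV
print(mad_update(aoi_tbs, aoi_uav))   # dev3: biggest AoI drop at the TBS
```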

Predictive Closed-Loop Service Automation in O-RAN based Network Slicing

Feb 04, 2022
Joseph Thaliath, Solmaz Niknam, Sukhdeep Singh, Rahul Banerji, Navrati Saxena, Harpreet S. Dhillon, Jeffrey H. Reed, Ali Kashif Bashir, Avinash Bhat, Abhishek Roy

Network slicing introduces customized and agile network deployment for managing different service types for various verticals under the same infrastructure. To cater to the dynamic service requirements of these verticals and meet the required quality-of-service (QoS) specified in the service-level agreement (SLA), network slices need to be isolated through dedicated elements and resources. Additionally, the resources allocated to these slices need to be continuously monitored and intelligently managed. This enables immediate detection and correction of any SLA violation to support automated service assurance in a closed-loop fashion. By reducing human intervention, intelligent and closed-loop resource management reduces the cost of offering flexible services. Resource management in a network shared among verticals (potentially administered by different providers) would be further facilitated through open and standardized interfaces. Open radio access network (O-RAN) is perhaps the most promising RAN architecture that inherits all the aforementioned features, namely intelligence, open and standard interfaces, and a closed control loop. Inspired by this, in this article we provide a closed-loop and intelligent resource provisioning scheme for O-RAN slicing to prevent SLA violations. To maintain realism, a real-world dataset of a large operator is used to train a learning solution for optimizing resource utilization in the proposed closed-loop service automation process. Moreover, the deployment architecture and the corresponding flow that are cognizant of O-RAN requirements are also discussed.
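A minimal sketch of the monitor-predict-act cycle described above follows; the classes and the simple moving-average predictor are placeholders, not O-RAN interfaces or the learned model trained on the operator dataset.

```python
# Minimal closed-loop sketch: monitor slice load, forecast demand, and
# proactively reallocate before an SLA breach. All names are placeholders.
from collections import deque

class Slice:
    def __init__(self, name, allocated):
        self.name, self.allocated = name, allocated
        self.history = deque(maxlen=5)          # recent load samples

    def observe(self, load):
        self.history.append(load)

def predict(history):
    """Placeholder predictor: moving average of recent load."""
    return sum(history) / len(history)

def control_loop(slices, headroom=1.2):
    for s in slices:
        forecast = predict(s.history)
        if forecast * headroom > s.allocated:   # predicted SLA risk
            s.allocated = forecast * headroom   # proactive reallocation
            print(f"{s.name}: scaled to {s.allocated:.1f} resource units")

embb = Slice("eMBB", allocated=10.0)
for load in [6, 8, 9, 11, 12]:                  # rising demand
    embb.observe(load)
control_loop([embb])
```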

* 7 pages, 3 figures, 1 table 

AoI-minimizing Scheduling in UAV-relayed IoT Networks

Jul 30, 2021
Biplav Choudhury, Vijay K. Shah, Aidin Ferdowsi, Jeffrey H. Reed, Y. Thomas Hou

Due to their flexibility, autonomy, and low operational cost, unmanned aerial vehicles (UAVs), acting as aerial base stations, are increasingly being used as relays to collect time-sensitive information (i.e., status updates) from IoT devices and deliver it to the nearby terrestrial base station (TBS), where the information gets processed. In order to ensure timely delivery of information to the TBS (from all IoT devices), optimal scheduling of time-sensitive information over two-hop UAV-relayed IoT networks (i.e., IoT device to the UAV [hop 1], and UAV to the TBS [hop 2]) becomes a critical challenge. To address this, we propose scheduling policies for Age of Information (AoI) minimization in such two-hop UAV-relayed IoT networks. To this end, we present a low-complexity MAF-MAD scheduler that employs the Maximum AoI First (MAF) policy for sampling of IoT devices at the UAV (hop 1) and the Maximum AoI Difference (MAD) policy for updating sampled packets from the UAV to the TBS (hop 2). We show that MAF-MAD is the optimal scheduler under ideal conditions, i.e., error-free channels and generate-at-will traffic generation at IoT devices. For realistic conditions, we instead propose a Deep Q-Network (DQN)-based scheduler. Our simulation results show that the DQN-based scheduler outperforms the MAF-MAD scheduler and three other baseline schedulers, i.e., Maximum AoI First (MAF), Round Robin (RR), and Random, employed at both hops under general conditions when the network is small (with tens of IoT devices). However, it does not scale well with network size, whereas MAF-MAD outperforms all other schedulers under all considered scenarios for larger networks.
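For reference, the standard two-hop AoI dynamics implicit in this setup can be written as follows; the notation is assumed, since the abstract does not reproduce the paper's formulation.

```latex
% Standard two-hop AoI recursions (assumed notation, not the paper's):
% A^U_i(t): AoI of device i's data at the UAV; A^B_i(t): AoI at the TBS.
\begin{align*}
A^{U}_i(t+1) &=
\begin{cases}
  t + 1 - g_i(t) & \text{if the UAV samples device } i \text{ in slot } t,\\
  A^{U}_i(t) + 1 & \text{otherwise,}
\end{cases}\\
A^{B}_i(t+1) &=
\begin{cases}
  A^{U}_i(t) + 1 & \text{if the UAV forwards device } i\text{'s packet in slot } t,\\
  A^{B}_i(t) + 1 & \text{otherwise,}
\end{cases}
\end{align*}
% where g_i(t) is the generation time of the packet sampled from device i
% (g_i(t) = t under generate-at-will traffic).
```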

RAN Slicing in Multi-MVNO Environment under Dynamic Channel Conditions

Apr 11, 2021
Darshan A. Ravi, Vijay K. Shah, Chengzhang Li, Tom Hou, Jeffrey H. Reed

With the increasing diversity in the requirements of wireless services with guaranteed quality of service (QoS), radio access network (RAN) slicing becomes an important aspect of implementing next-generation wireless systems (5G). RAN slicing involves the division of network resources into many logical segments, where each segment has a specific QoS and can serve users of a mobile virtual network operator (MVNO) with these requirements. This allows the Network Operator (NO) to provide service to multiple MVNOs, each with different service requirements. Efficient allocation of the available resources to slices becomes vital in determining the number of users, and therefore the number of MVNOs, that a NO can support. In this work, we study the problem of Modulation and Coding Scheme (MCS)-aware RAN slicing (MaRS) in the context of a wireless system having MVNOs whose users have minimum data rate requirements. The Channel Quality Indicator (CQI) report sent by each user in the network determines the MCS selected, which in turn determines the achievable data rate. However, the channel conditions might not remain the same for the entire duration a user is served. For this reason, we consider the channel conditions to be dynamic, where the choice of MCS level varies at each time instant. We model the MaRS problem as a Non-Linear Programming problem and show that it is NP-Hard. Next, we propose a solution based on the greedy algorithm paradigm. We then develop an upper performance bound for this problem and finally evaluate the performance of the proposed solution by comparing it against the upper bound under various channel and network configurations.
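A hedged sketch of a greedy allocation in this spirit appears below: it admits the most spectrally efficient users first, granting each the fewest resource blocks (RBs) meeting its minimum rate. The per-RB rates here are fixed toy numbers, whereas in the paper the MCS (and hence the rate) varies over time.

```python
# Illustrative greedy RB allocation; a static simplification, not the paper's
# algorithm for time-varying MCS. Rates and requirements are toy numbers.
def greedy_rb_allocation(users, total_rbs):
    """users: list of (name, rate_per_rb, min_rate); returns admitted users."""
    admitted, remaining = [], total_rbs
    # Most efficient users need the fewest RBs to meet their guaranteed rate
    for name, rate_per_rb, min_rate in sorted(users, key=lambda u: -u[1]):
        need = -(-min_rate // rate_per_rb)       # ceiling division
        if need <= remaining:
            admitted.append((name, need))
            remaining -= need
    return admitted

users = [("u1", 4, 10), ("u2", 2, 10), ("u3", 6, 12)]   # (name, rate/RB, min rate)
print(greedy_rb_allocation(users, total_rbs=8))          # u3 and u1 fit; u2 does not
```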

Deep Learning for Fast and Reliable Initial Access in AI-Driven 6G mmWave Networks

Jan 06, 2021
Tarun S. Cousik, Vijay K. Shah, Tugba Erpek, Yalin E. Sagduyu, Jeffrey H. Reed

We present DeepIA, a deep neural network (DNN) framework for enabling fast and reliable initial access (IA) in AI-driven beyond-5G and 6G millimeter wave (mmWave) networks. DeepIA reduces the beam sweep time compared to a conventional exhaustive search-based IA process by utilizing only a subset of the available beams. DeepIA maps received signal strengths (RSSs) obtained from a subset of beams to the beam that is best oriented to the receiver. In both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions, DeepIA reduces the IA time and achieves higher beam prediction accuracy than conventional IA. We show that the beam prediction accuracy of DeepIA saturates with the number of beams used for IA and depends on the particular selection of the beams. In LoS conditions, the selection of the beams is consequential and improves the accuracy by up to 70%. In NLoS conditions, it improves accuracy by up to 35%. We find that averaging multiple RSS snapshots further reduces the number of beams needed and achieves more than 95% accuracy in both LoS and NLoS conditions. Finally, we evaluate the beam prediction time of DeepIA through an embedded hardware implementation and show the improvement over conventional beam sweeping.
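As a rough illustration of the mapping DeepIA learns, the sketch below feeds RSS values from a subset of beams into a small network that outputs logits over all candidate beams; the dimensions and architecture are assumptions, not the paper's DNN.

```python
# Hedged sketch of the DeepIA idea: map RSS from a beam subset to the index
# of the best beam. Dimensions, data, and architecture are assumed.
import torch
import torch.nn as nn

n_subset, n_beams = 6, 24          # beams actually swept vs. candidate beams
model = nn.Sequential(
    nn.Linear(n_subset, 64), nn.ReLU(),
    nn.Linear(64, n_beams),        # logits over candidate best beams
)

rss = torch.randn(32, n_subset)    # stand-in for measured RSS values, batch of 32
best_beam = model(rss).argmax(dim=1)
print(best_beam.shape)             # one predicted beam index per sample
```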

* arXiv admin note: substantial text overlap with arXiv:2006.12653 