Aiman Erbad

Adaptive ResNet Architecture for Distributed Inference in Resource-Constrained IoT Systems

Jul 21, 2023
Fazeela Mazhar Khan, Emna Baccour, Aiman Erbad, Mounir Hamdi

As deep neural networks continue to expand and become more complex, most edge devices are unable to handle their extensive processing requirements. Therefore, the concept of distributed inference is essential to distribute the neural network among a cluster of nodes. However, distribution may lead to additional energy consumption and dependency among devices that suffer from unstable transmission rates. Unstable transmission rates harm the real-time performance of IoT devices, causing high latency, high energy usage, and potential failures. Hence, for dynamic systems, it is necessary to have a resilient DNN with an adaptive architecture that can downsize according to the available resources. This paper presents an empirical study that identifies the connections in ResNet that can be dropped without significantly impacting the model's performance, to enable distribution in case of resource shortage. Based on the results, a multi-objective optimization problem is formulated to minimize latency and maximize accuracy as per the available resources. Our experiments demonstrate that an adaptive ResNet architecture can reduce the shared data, energy consumption, and latency throughout the distributed inference while maintaining high accuracy.
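
The abstract above describes an adaptive architecture in which some ResNet skip connections are dropped when resources are scarce. As a rough illustration of the idea, and not the authors' implementation, the PyTorch sketch below shows a residual block whose skip connection can be switched off so that the corresponding identity tensor no longer needs to be kept or exchanged during distributed inference; the class name and flag are hypothetical.

```python
# Hypothetical sketch (not the authors' code): a residual block whose skip
# connection can be disabled, e.g. when the identity tensor would have to be
# shipped between resource-constrained nodes during distributed inference.
import torch
import torch.nn as nn


class AdaptiveBasicBlock(nn.Module):
    def __init__(self, channels: int, use_skip: bool = True):
        super().__init__()
        self.use_skip = use_skip
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.use_skip:      # keep the residual connection when resources allow
            out = out + x      # dropping it removes one tensor shared across devices
        return self.relu(out)


block = AdaptiveBasicBlock(64, use_skip=False)   # downsized variant
y = block(torch.randn(1, 64, 32, 32))            # forward pass still works
```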

* Accepted in the International Wireless Communications & Mobile Computing Conference (IWCMC 2023) 

Zero-touch realization of Pervasive Artificial Intelligence-as-a-service in 6G networks

Jul 21, 2023
Emna Baccour, Mhd Saria Allahham, Aiman Erbad, Amr Mohamed, Ahmed Refaey Hussein, Mounir Hamdi

The vision of the upcoming 6G technologies, characterized by ultra-dense networks, low latency, and fast data rates, is to support Pervasive AI (PAI) using zero-touch solutions enabling self-X (e.g., self-configuration, self-monitoring, and self-healing) services. However, the research on 6G is still in its infancy, and only the first steps have been taken to conceptualize its design, investigate its implementation, and plan for use cases. Toward this end, academia and industry communities have gradually shifted from theoretical studies of AI distribution to real-world deployment and standardization. Still, designing an end-to-end framework that systematizes the AI distribution by allowing easier access to the service using a third-party application, assisted by zero-touch service provisioning, has not been well explored. In this context, we introduce a novel platform architecture to deploy a zero-touch PAI-as-a-Service (PAIaaS) in 6G networks supported by a blockchain-based smart system. This platform aims to standardize pervasive AI at all levels of the architecture and unify the interfaces in order to facilitate service deployment across application and infrastructure domains, relieve users' concerns about cost, security, and resource allocation, and, at the same time, respect the stringent performance requirements of 6G. As a proof of concept, we present a Federated Learning-as-a-service use case where we evaluate the ability of our proposed system to self-optimize and self-adapt to the dynamics of 6G networks in addition to minimizing the users' perceived costs.
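
Since the proof of concept is a Federated Learning-as-a-service use case, a minimal sketch of the kind of aggregation step such a service would run is given below. This is a generic FedAvg-style weighted average, not the paper's platform code, and the client counts and array shapes are illustrative.

```python
# Generic FedAvg-style aggregation sketch for a Federated Learning-as-a-service
# round; purely illustrative, not the PAIaaS platform code.
import numpy as np


def fedavg(client_updates, client_sizes):
    """Average client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))


# Three hypothetical edge clients reporting flattened model updates.
updates = [np.random.randn(10) for _ in range(3)]
sizes = [100, 250, 50]
global_update = fedavg(updates, sizes)
print(global_update.shape)   # (10,)
```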

* In IEEE Communications Magazine, vol. 61, no. 2, pp. 110-116, 2023  
* IEEE Communications Magazine 

LLHR: Low Latency and High Reliability CNN Distributed Inference for Resource-Constrained UAV Swarms

May 25, 2023
Marwan Dhuheir, Aiman Erbad, Sinan Sabeeh

Recently, Unmanned Aerial Vehicles (UAVs) have shown impressive performance in many critical applications, such as surveillance, search and rescue operations, and environmental monitoring. In many of these applications, the UAVs capture images as well as other sensory data and then send data processing requests to remote servers. Nevertheless, this approach is not always practical in real-time applications due to unstable connections, limited bandwidth, limited energy, and strict end-to-end latency. One promising solution is to divide the inference requests into subtasks that can be distributed among UAVs in a swarm based on the available resources. Moreover, these tasks create intermediate results that need to be transmitted reliably as the swarm moves to cover the area. Our system model deals with real-time requests, aiming to find the optimal transmission power that guarantees higher reliability and low latency. We formulate the Low Latency and High Reliability (LLHR) distributed inference as an optimization problem and, due to its complexity, divide it into three subproblems. In the first subproblem, we find the optimal transmit power of the connected UAVs with guaranteed transmission reliability. The second subproblem aims to find the optimal positions of the UAVs in the grid, while the last subproblem finds the optimal placement of the CNN layers on the available UAVs. We conduct extensive simulations and compare our work to two baseline models, demonstrating that our model outperforms the competing models.
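
To make the third subproblem concrete, here is a toy greedy sketch that places consecutive CNN layers onto UAVs without exceeding a per-UAV memory budget. It is a simplification for illustration only; the paper solves the placement jointly with power control and positioning, and all names and numbers below are assumptions.

```python
# Toy greedy placement of consecutive CNN layers onto UAVs under per-UAV memory
# budgets (a simplification of the third LLHR subproblem); all values are
# hypothetical, and the real problem is solved jointly with power and positions.
def place_layers(layer_mem, uav_mem):
    """Return, for each layer, the index of the UAV that will execute it."""
    placement, uav, used = [], 0, 0.0
    for mem in layer_mem:
        while used + mem > uav_mem[uav]:   # current UAV is full: move to the next
            uav += 1
            used = 0.0
            if uav >= len(uav_mem):
                raise ValueError("not enough UAV memory for all layers")
        placement.append(uav)
        used += mem
    return placement


layers = [4.0, 2.0, 2.5, 1.0, 0.5]   # memory per layer in MB (made up)
uavs = [6.0, 4.0, 4.0]               # memory budget per UAV in MB (made up)
print(place_layers(layers, uavs))    # -> [0, 0, 1, 1, 1]
```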

* In 2023 IEEE Wireless Communications and Networking Conference (WCNC), 26 Mar 2023 (pp. 1-6). IEEE  
* arXiv admin note: substantial text overlap with arXiv:2212.11201 

Optimal Resource Management for Hierarchical Federated Learning over HetNets with Wireless Energy Transfer

May 03, 2023
Rami Hamdi, Ahmed Ben Said, Emna Baccour, Aiman Erbad, Amr Mohamed, Mounir Hamdi, Mohsen Guizani

Remote monitoring systems analyze the environment dynamics in different smart industrial applications, such as occupational health and safety, and environmental monitoring. Specifically, in industrial Internet of Things (IoT) systems, the huge number of devices and the expected performance put pressure on resources such as computation, network, and device energy. Distributed training of Machine and Deep Learning (ML/DL) models for intelligent industrial IoT applications is very challenging for resource-limited devices over heterogeneous wireless networks (HetNets). Hierarchical Federated Learning (HFL) performs training at multiple layers, offloading the tasks to nearby Multi-Access Edge Computing (MEC) units. In this paper, we propose a novel energy-efficient HFL framework enabled by Wireless Energy Transfer (WET) and designed for heterogeneous networks with massive Multiple-Input Multiple-Output (MIMO) wireless backhaul. Our energy-efficiency approach is formulated as a Mixed-Integer Non-Linear Programming (MINLP) problem, where we optimize the HFL device association and manage the wirelessly transmitted energy. However, due to its high complexity, we design a Heuristic Resource Management Algorithm, namely H2RMA, that respects energy, channel quality, and accuracy constraints while presenting a low computational complexity. We also improve the energy consumption of the network using an efficient device scheduling scheme. Finally, we investigate device mobility and its impact on the HFL performance. Our extensive experiments confirm the high performance of the proposed resource management approach in HFL over HetNets, in terms of training loss and grid energy costs.
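
As a rough illustration of the device association part of the MINLP, the sketch below greedily assigns devices to MEC units by energy cost under simple capacity limits. It stands in for the flavor of a heuristic like H2RMA but is not the paper's algorithm; it ignores WET, channel quality, and accuracy constraints, and all values are made up.

```python
# Toy greedy device-to-MEC association by energy cost under simple capacity
# limits; a stand-in illustration, not the H2RMA heuristic from the paper.
def associate(energy_cost, mec_capacity):
    """energy_cost[d][m]: energy for device d to train via MEC unit m."""
    assignment, load = {}, [0] * len(mec_capacity)
    # Handle the devices whose cheapest option is most expensive first.
    order = sorted(range(len(energy_cost)),
                   key=lambda d: min(energy_cost[d]), reverse=True)
    for d in order:
        for m in sorted(range(len(mec_capacity)), key=lambda m: energy_cost[d][m]):
            if load[m] < mec_capacity[m]:    # respect the MEC unit's capacity
                assignment[d] = m
                load[m] += 1
                break
    return assignment


costs = [[1.0, 2.0], [1.5, 0.5], [2.0, 2.5]]     # 3 devices x 2 MEC units (made up)
print(associate(costs, mec_capacity=[2, 2]))
```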

* IEEE Internet of Things Journal, 2023  

Physical Layer Security in Satellite Communication: State-of-the-art and Open Problems

Jan 09, 2023
Nora Abdelsalam, Saif Al-Kuwari, Aiman Erbad

Satellite communications have emerged as a promising extension to terrestrial networks in future 6G network research due to their extensive coverage in remote areas and their ability to support the increasing traffic rates and heterogeneous networks. Like other wireless communication technologies, satellite signals are transmitted in a shared medium, making them vulnerable to attacks such as eavesdropping, jamming, and spoofing. A good candidate to overcome these issues is physical layer security (PLS), which utilizes physical layer characteristics to provide security, especially due to its suitability for resource-limited devices such as satellites and IoT devices. In this paper, we provide a thorough and up-to-date review of PLS solutions for securing satellite communication. We classify the main satellite applications into five domains, namely: satellite-terrestrial, satellite-based IoT, satellite navigation systems, FSO-based, and inter-satellite. In each domain, we discuss and investigate how PLS can be used to improve the system's overall security, preserve desirable security properties, and resist popular attacks. Finally, we highlight a few gaps in the related literature and discuss open research problems and opportunities for leveraging PLS in satellite communication.

Deep Reinforcement Learning for Trajectory Path Planning and Distributed Inference in Resource-Constrained UAV Swarms

Dec 21, 2022
Marwan Dhuheir, Emna Baccour, Aiman Erbad, Sinan Sabeeh Al-Obaidi, Mounir Hamdi

The deployment flexibility and maneuverability of Unmanned Aerial Vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking and border monitoring. In many critical applications, UAVs capture images and other sensory data and then send the captured data to remote servers for inference and data processing tasks. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide the inference requests into multiple parts (layers or segments), with each part being executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; thus, planning their paths becomes critical, particularly to reduce the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce the data transmission latency by communicating with devices in the same proximity while mitigating transmission interference. This work aims to design a model for distributed collaborative inference requests and path planning in a UAV swarm while respecting the resource constraints due to the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that aims to minimize latency. The formulated problem is NP-hard, so finding the optimal solution is quite complex; thus, this paper introduces a real-time and dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
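
For readers unfamiliar with the reinforcement learning machinery, the toy tabular Q-learning loop below shows the general pattern of learning a positioning policy on a small grid with a made-up latency reward. The paper uses deep reinforcement learning over a much richer state and action space; this sketch only illustrates the training loop, and every quantity in it is hypothetical.

```python
# Toy tabular Q-learning loop for a single UAV choosing cells on a 3x3 grid,
# with a made-up "negative latency" reward; a stand-in for the deep RL agent.
import random

N_CELLS, N_ACTIONS = 9, 4                        # 3x3 grid, four move directions
Q = [[0.0] * N_ACTIONS for _ in range(N_CELLS)]
alpha, gamma, eps = 0.1, 0.9, 0.2                # learning rate, discount, exploration


def step(state, action):
    """Toy environment: wrap-around moves; cells near the centre have lower latency."""
    next_state = (state + [3, -3, 1, -1][action]) % N_CELLS
    reward = -0.1 * abs(next_state - 4)
    return next_state, reward


state = 0
for _ in range(5000):
    if random.random() < eps:                    # explore
        action = random.randrange(N_ACTIONS)
    else:                                        # exploit the current estimate
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, r = step(state, action)
    Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print(max(range(N_ACTIONS), key=lambda a: Q[0][a]))   # learned move from cell 0
```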

* Accepted journal paper at IEEE Internet of Things Journal 

RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems

Aug 27, 2022
Emna Baccour, Aiman Erbad, Amr Mohamed, Mounir Hamdi, Mohsen Guizani

Although Deep Neural Networks (DNN) have become the backbone technology of several ubiquitous applications, their deployment in resource-constrained machines, e.g., Internet of Things (IoT) devices, is still challenging. To satisfy the resource requirements of such a paradigm, collaborative deep inference with IoT synergy was introduced. However, the distribution of DNN networks suffers from severe data leakage. Various threats have been presented, including black-box attacks, where malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures were designed to achieve privacy-preserving DNN, most of them result in additional computation and lower accuracy. In this paper, we present an approach that targets the security of collaborative deep inference by re-thinking the distribution strategy, without sacrificing the model performance. Particularly, we examine different DNN partitions that make the model susceptible to black-box threats, and we derive the amount of data that should be allocated per device to hide the properties of the original input. We formulate this methodology as an optimization problem, where we establish a trade-off between the latency of co-inference and the privacy level of the data. Next, to relax the optimal solution, we shape our approach as a Reinforcement Learning (RL) design that supports heterogeneous devices as well as multiple DNNs/datasets.
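
The latency-privacy trade-off can be pictured with the toy sketch below, which scores a few candidate DNN cut points by a weighted combination of normalized latency and a privacy score and picks the best one. The cut names, numbers, and scoring are assumptions for illustration; the paper formulates the full problem as an optimization and relaxes it with reinforcement learning rather than enumerating partitions.

```python
# Toy selection of a DNN cut point trading co-inference latency against a
# privacy score; the cuts, numbers, and scoring are hypothetical.
partitions = {
    # cut layer: (latency in ms, privacy score in [0, 1]); all made up
    "conv1": (12.0, 0.2),
    "conv3": (18.0, 0.6),
    "conv5": (25.0, 0.9),
}


def pick_partition(parts, privacy_weight=0.5):
    """Minimise normalised latency minus the weighted privacy score."""
    max_lat = max(lat for lat, _ in parts.values())

    def cost(lat, priv):
        return lat / max_lat - privacy_weight * priv

    return min(parts, key=lambda k: cost(*parts[k]))


print(pick_partition(partitions, privacy_weight=0.8))   # favours a deeper cut
```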

* Volume: 9, Issue: 4, 01 July-Aug. 2022  
* Published in IEEE Transactions on Network Science and Engineering 

Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach

Sep 25, 2021
Mhd Saria Allahham, Sameh Sorour, Amr Mohamed, Aiman Erbad, Mohsen Guizani

Mobile Edge Learning (MEL) is a learning paradigm that enables distributed training of Machine Learning models over heterogeneous edge devices (e.g., IoT devices). Multi-orchestrator MEL refers to the coexistence of multiple learning tasks with different datasets, each of which is governed by an orchestrator to facilitate the distributed training process. In MEL, the training performance deteriorates without sufficient training data or computing resources. Therefore, it is crucial to motivate edge devices to become learners and offer their computing resources, and to either offer their private data or receive the needed data from the orchestrator and participate in the training process of a learning task. In this work, we propose an incentive mechanism, where we formulate the orchestrator-learner interactions as a 2-round Stackelberg game to motivate the participation of the learners. In the first round, the learners decide which learning task to engage in, and then, in the second round, they decide the amount of data to use for training if they participate, such that their utility is maximized. We then study the game analytically and derive the learners' optimal strategy. Finally, numerical experiments have been conducted to evaluate the performance of the proposed incentive mechanism.
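
A toy backward-induction reading of the 2-round game is sketched below: in round two a learner maximizes a simple concave utility (reward minus a quadratic energy cost) to choose its data amount, and in round one it picks the task whose offer yields the highest resulting utility. The utility shape and all parameters are illustrative assumptions, not the paper's exact model.

```python
# Toy backward induction for the 2-round Stackelberg interaction; the concave
# utility u(d) = r*d - c*d^2 and all parameters are illustrative assumptions.
def best_data_amount(reward_per_sample, cost_per_sample, d_max):
    """Round 2: the unconstrained maximiser of u(d) is r / (2c), clipped to [0, d_max]."""
    d_star = reward_per_sample / (2.0 * cost_per_sample)
    return min(max(d_star, 0.0), d_max)


def choose_task(offers, cost_per_sample, d_max):
    """Round 1: pick the orchestrator/task whose offer yields the highest utility."""
    def utility(r):
        d = best_data_amount(r, cost_per_sample, d_max)
        return r * d - cost_per_sample * d * d
    return max(offers, key=lambda task: utility(offers[task]))


offers = {"task_A": 0.8, "task_B": 1.1}          # reward per sample (made up)
task = choose_task(offers, cost_per_sample=0.002, d_max=200)
print(task, best_data_amount(offers[task], 0.002, 200))
```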

LoRa-RL: Deep Reinforcement Learning for Resource Management in Hybrid Energy LoRa Wireless Networks

Sep 06, 2021
Rami Hamdi, Emna Baccour, Aiman Erbad, Marwa Qaraqe, Mounir Hamdi

LoRa wireless networks are considered a key enabling technology for next-generation Internet of Things (IoT) systems. New IoT deployments (e.g., smart city scenarios) can have thousands of devices per square kilometer, leading to a huge amount of power consumption to provide connectivity. In this paper, we investigate green LoRa wireless networks powered by a hybrid of the grid and renewable energy sources, which can benefit from harvested energy while dealing with an intermittent supply. This paper proposes resource management schemes for the limited number of channels and spreading factors (SFs) with the objective of improving the LoRa gateway energy efficiency. First, the problem of grid power consumption minimization while satisfying the system's quality of service demands is formulated. Specifically, both the uncorrelated and time-correlated channel scenarios are investigated. The optimal resource management problem is solved by decoupling the formulated problem into two sub-problems: the channel and SF assignment problem and the energy management problem. Since the optimal solution is obtained with high complexity, online resource management heuristic algorithms that minimize the grid energy consumption are proposed. Finally, taking into account the channel and energy correlation, adaptable resource management schemes based on Reinforcement Learning (RL) are developed. Simulation results show that the proposed resource management schemes offer efficient use of renewable energy in LoRa wireless networks.
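
As a concrete, simplified picture of channel and SF assignment, the sketch below gives farther devices larger spreading factors and spreads them round-robin over the available channels. This is a toy heuristic with made-up distance thresholds, not the paper's optimal, heuristic, or RL-based schemes.

```python
# Toy channel and spreading-factor (SF) assignment: farther devices get larger
# SFs, and devices are spread round-robin over the channels; the distance
# thresholds are made up, and this is not the paper's optimal or RL scheme.
def assign_sf_and_channel(distances_km, n_channels=3):
    thresholds = [(1.0, 7), (2.0, 8), (4.0, 9), (6.0, 10), (8.0, 11), (float("inf"), 12)]
    plan = []
    for i, d in enumerate(sorted(distances_km)):
        sf = next(sf for limit, sf in thresholds if d <= limit)   # first range that fits
        plan.append({"distance_km": d, "sf": sf, "channel": i % n_channels})
    return plan


for row in assign_sf_and_channel([0.4, 1.6, 5.2, 7.9]):
    print(row)
```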

* IEEE Internet of Things Journal, to appear 

Energy-Efficient Multi-Orchestrator Mobile Edge Learning

Sep 02, 2021
Mhd Saria Allahham, Sameh Sorour, Amr Mohamed, Aiman Erbad, Mohsen Guizani

Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices (e.g., IoT devices). In MEL, multiple learning tasks with different datasets may coexist. The heterogeneity in edge devices' capabilities requires the joint optimization of the learner-orchestrator association and task allocation. To this end, we aim to develop an energy-efficient framework for learner-orchestrator association and learning task allocation, in which each orchestrator gets associated with a group of learners with the same learning task based on their communication channel qualities and computational resources, and allocates the tasks accordingly. Therein, a multi-objective optimization problem is formulated to minimize the total energy consumption and maximize the learning tasks' accuracy. However, solving such an optimization problem requires centralization and the presence of the whole environment information at a single entity, which becomes impractical in large-scale systems. To reduce the solution complexity and to enable solution decentralization, we propose lightweight heuristic algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity. Simulation results show that the proposed approaches reduce the energy consumption significantly while executing multiple learning tasks, compared to recent state-of-the-art methods.
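
One common way to handle the two objectives is a weighted-sum scalarization; the sketch below scores a candidate learner-orchestrator association by total energy and mean accuracy. It is an illustration of the trade-off only, with hypothetical weights and values, and does not reproduce the paper's heuristics.

```python
# Toy weighted-sum scalarisation of the two objectives (total energy, accuracy)
# for scoring one learner-orchestrator association; weights and values are
# hypothetical, and the paper relies on heuristics rather than enumeration.
def association_score(assignment, energy, accuracy, w_energy=0.5, w_acc=0.5):
    """assignment: {learner: orchestrator}; energy/accuracy keyed by (learner, orch)."""
    total_energy = sum(energy[(l, o)] for l, o in assignment.items())
    mean_acc = sum(accuracy[(l, o)] for l, o in assignment.items()) / len(assignment)
    return w_acc * mean_acc - w_energy * total_energy   # higher is better


energy = {("l1", "o1"): 2.0, ("l1", "o2"): 3.0, ("l2", "o1"): 1.5, ("l2", "o2"): 1.0}
accuracy = {("l1", "o1"): 0.90, ("l1", "o2"): 0.93, ("l2", "o1"): 0.88, ("l2", "o2"): 0.91}

candidates = [{"l1": "o1", "l2": "o2"}, {"l1": "o2", "l2": "o1"}]
print(max(candidates, key=lambda a: association_score(a, energy, accuracy)))
```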
