The Internet is the most complex machine humankind has ever built, and defending it against intrusions is more complex still. With the ever-increasing number of new intrusions, intrusion detection relies more and more on Artificial Intelligence. Interpretability and transparency of the machine learning model are the foundation of trust in AI-driven intrusion detection results. Current interpretable Artificial Intelligence techniques for intrusion detection are heuristic, which is neither accurate nor sufficient. This paper proposes a rigorously interpretable Artificial Intelligence driven intrusion detection approach based on an artificial immune system. The calculation process for the rigorous interpretation of a decision tree model is presented in detail. Prime implicant explanations for benign traffic flows are derived and used as rules for negative selection in the cyber immune system. Experiments are carried out on real-life traffic.
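As a minimal sketch of the idea above, the following toy code treats a prime implicant of a decision tree's "benign" prediction as the "self" definition in a negative-selection step. The flow features, thresholds, and rule form are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: a prime-implicant rule for benign traffic used as a
# negative-selection detector. Feature names and thresholds are made up.

def benign_prime_implicant(flow):
    """A prime implicant is a minimal conjunction of feature conditions
    that is sufficient for the tree to predict 'benign'."""
    return flow["packet_rate"] <= 100 and flow["syn_ratio"] <= 0.5

def negative_selection(flow):
    """Negative selection: flag any flow outside the benign 'self' region."""
    return "benign" if benign_prime_implicant(flow) else "suspicious"

print(negative_selection({"packet_rate": 40, "syn_ratio": 0.1}))   # benign
print(negative_selection({"packet_rate": 900, "syn_ratio": 0.9}))  # suspicious
```

Because a prime implicant is a provably sufficient condition rather than a heuristic attribution, any flow it rejects is guaranteed to fall outside the explained benign region of the model.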
Intelligent reflecting surface (IRS), a promising technology for rendering high throughput in future communication systems, is compatible with various communication techniques such as non-orthogonal multiple access (NOMA). In this paper, the downlink transmission of IRS-assisted NOMA communication is considered under imperfect channel state information (CSI). Consequently, a robust IRS-aided NOMA design is proposed by solving the sum-rate maximization problem to jointly find the optimal beamforming vectors for the access point and the passive reflection matrix for the IRS, using the penalty dual decomposition (PDD) scheme. This problem can be solved through an iterative algorithm with closed-form solutions in each step, and its performance is shown to be very close to the upper bound obtained in the perfect-CSI scenario. We also present a trellis-based method for optimal discrete phase-shift selection at the IRS, which is shown to outperform the conventional quantization method. Our results show that the proposed algorithms, for both continuous and discrete IRS, have very low computational complexity compared to other schemes in the literature. Furthermore, we compare the achievable sum rate of IRS-aided NOMA with that of IRS-aided orthogonal multiple access (OMA), which demonstrates the superiority of NOMA over OMA under a tolerated level of channel uncertainty.
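The gap between quantization and discrete search that motivates the trellis method can be illustrated with a toy example. This is not the paper's trellis algorithm: it only shows, for the reflected-sum gain |Σ_n h_n e^{jθ_n}|, that rounding the continuous optimum to the nearest discrete phase can lose against a search over the discrete phase set. The channel values and the 1-bit phase set are assumptions for illustration.

```python
import cmath
import itertools
import math

# Illustrative channel coefficients and a 1-bit discrete phase set (made up).
h = [1.0 * cmath.exp(0.3j), 0.6 * cmath.exp(2.8j), 0.9 * cmath.exp(-1.2j)]
phase_set = [0.0, math.pi]

def gain(thetas):
    # Reflected-sum gain |sum_n h_n * e^{j*theta_n}|
    return abs(sum(hn * cmath.exp(1j * t) for hn, t in zip(h, thetas)))

# Conventional quantization: round the continuous optimum theta_n* = -arg(h_n)
# to the nearest discrete phase (nearest point on the unit circle).
quant = [min(phase_set,
             key=lambda p: abs(cmath.exp(1j * p) - cmath.exp(-1j * cmath.phase(hn))))
         for hn in h]

# Exhaustive search over the discrete set -- the benchmark a trellis-based
# method approximates at much lower complexity.
best = max(itertools.product(phase_set, repeat=len(h)), key=gain)
```

By construction `gain(best) >= gain(quant)`, and on many channel draws the inequality is strict, which is why a smarter discrete selection pays off.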
In this paper, we investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks. In this system, the IoT devices can collaboratively train a shared model without compromising data privacy. However, due to limited resources in industrial IoT networks, including computational power, bandwidth, and channel state, it is challenging for many devices to accomplish local training and upload their weights to the edge server in time. To address this issue, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework, where the deep model is divided into several sub-models of different depths, each producing predictions from the exit of the corresponding sub-model. In this way, devices with insufficient computational power can choose earlier exits and avoid training the complete model, which reduces computational latency and enables as many devices as possible to participate in aggregation within a latency threshold. Moreover, we propose a greedy exit-selection and bandwidth-allocation algorithm to maximize the total number of exits in each communication round. Simulation experiments conducted on the classical Fashion-MNIST dataset under a non-independent and identically distributed (non-IID) setting show that the proposed strategy outperforms conventional FL. In particular, the proposed ME-FEEL achieves an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
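The exit-selection idea can be sketched in a few lines. This simplified version omits the paper's joint bandwidth allocation: each device greedily takes the deepest exit it can finish within the latency threshold, and a device that cannot afford even the first exit drops out of the round. The latency model and numbers are illustrative assumptions.

```python
# Hypothetical sketch of latency-aware exit selection in a multi-exit model.

def select_exits(per_exit_work, device_speeds, threshold):
    """per_exit_work[k]: computation needed to reach exit k (normalized units);
    device_speeds[i]: processing speed of device i (units per second).
    Returns, per device, the deepest affordable exit index, or None."""
    choices = []
    for speed in device_speeds:
        feasible = [k for k, w in enumerate(per_exit_work) if w / speed <= threshold]
        choices.append(max(feasible) if feasible else None)  # greedy: deepest exit
    return choices

# Three devices of increasing speed pick increasingly deep exits.
print(select_exits([1.0, 2.0, 4.0], [0.5, 1.0, 5.0], 2.5))  # → [0, 1, 2]
```

The greedy rule maximizes each device's contribution individually; the paper's algorithm additionally trades bandwidth across devices to maximize the total number of exits per round.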
Antenna arrays have a long history of more than 100 years and have evolved closely with the development of electronic and information technologies, playing an indispensable role in wireless communications and radar. With the rapid development of electronic and information technologies, the demand for all-time, all-domain, and full-space network services has exploded, and new communication requirements have been put forward for various space/air/ground platforms. To meet the ever-increasing requirements of future sixth generation (6G) wireless communications, such as high capacity, wide coverage, low latency, and strong robustness, it is promising to employ different types of antenna arrays with various beamforming technologies in space/air/ground communication networks, bringing in advantages such as considerable antenna gains, multiplexing gains, and diversity gains. However, enabling antenna arrays for space/air/ground communication networks poses specific, distinctive, and tricky challenges, which have attracted extensive research attention. This paper aims to provide an overview of the field of antenna array enabled space/air/ground communications and networking. The technical potentials and challenges of antenna array enabled space/air/ground communications and networking are presented first. Subsequently, the antenna array structures and designs are discussed. We then discuss various emerging technologies facilitated by antenna arrays to meet the new communication requirements of space/air/ground communication systems. Enabled by these emerging technologies, the distinct characteristics, challenges, and solutions for space communications, airborne communications, and ground communications are reviewed. Finally, we present promising directions for future research in antenna array enabled space/air/ground communications and networking.
Although the frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) system can offer high spectral and energy efficiency, it requires the downlink channel state information (CSI) to be fed back from the users to the base station (BS) in order to enable precoding design at the BS. However, the large dimension of the CSI matrices in the massive MIMO system makes CSI feedback very challenging, and compressing the feedback CSI is essential. To this end, this paper proposes a novel dilated convolution based CSI feedback network, namely DCRNet. Specifically, dilated convolutions are used to enhance the receptive field (RF) of the proposed DCRNet without increasing the convolution kernel size. Moreover, advanced encoder and decoder blocks are designed to improve the reconstruction performance and reduce computational complexity. Numerical results show the superiority of the proposed DCRNet over conventional networks. In particular, the proposed DCRNet achieves nearly state-of-the-art (SOTA) performance with much lower floating point operations (FLOPs). The open-source code and checkpoints of this work are available at https://github.com/recusant7/DCRNet.
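The key property exploited here, that dilation enlarges the receptive field without enlarging the kernel, can be checked with the standard receptive-field recurrence for stride-1 convolutions. The layer configuration below is illustrative, not DCRNet's actual architecture.

```python
# Sketch: receptive field (RF) of stacked stride-1 convolutions.
# Each layer adds (kernel_size - 1) * dilation to the RF.

def receptive_field(kernel_sizes, dilations):
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three plain 3-tap convs vs. three dilated 3-tap convs (dilations 1, 2, 4):
print(receptive_field([3, 3, 3], [1, 1, 1]))  # → 7
print(receptive_field([3, 3, 3], [1, 2, 4]))  # → 15
```

With the same number of weights and FLOPs per layer, the dilated stack sees more than twice the input span, which is what allows a compact network to capture the long-range structure of large CSI matrices.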
Reconfigurable intelligent surface (RIS) has been regarded as a promising tool to strengthen the quality of signal transmissions in non-orthogonal multiple access (NOMA) networks. This article introduces a heterogeneous network (HetNet) structure into RIS-aided NOMA multi-cell networks. A practical user equipment (UE) association scheme for maximizing the average received power is adopted. To evaluate system performance, we provide a stochastic geometry based analytical framework, where the locations of RISs, base stations (BSs), and UEs are modeled as homogeneous Poisson point processes (PPPs). Based on this framework, we first derive the closed-form probability density function (PDF) characterizing the distribution of the reflective links created by RISs. Then, both the exact expressions and upper/lower bounds of the UE association probability are calculated. Lastly, the analytical expressions of the signal-to-interference-plus-noise ratio (SINR) and rate coverage probability are deduced. Additionally, to investigate the impact of RISs on system coverage, the asymptotic expressions of the two coverage probabilities are derived. The theoretical results show that RIS length is not the decisive factor for coverage improvement. Numerical results demonstrate that the proposed RIS-aided HetNet structure brings a significant enhancement in rate coverage. Moreover, there exists an optimal combination of RIS and BS deployment densities that maximizes the coverage probability.
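The basic simulation primitive behind such a stochastic-geometry framework is sampling a homogeneous PPP: draw a Poisson number of points for the region's area times the intensity, then scatter them uniformly. A minimal sketch, with illustrative intensity and radius values:

```python
import math
import random

def poisson_knuth(mu, rng):
    # Knuth's multiplicative method for Poisson sampling; fine for modest mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_ppp(intensity, radius, rng):
    """Sample a homogeneous PPP of the given intensity in a disk of the
    given radius, as used to model BS/RIS/UE locations."""
    n = poisson_knuth(intensity * math.pi * radius * radius, rng)
    points = []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())  # sqrt gives uniform area density
        t = 2 * math.pi * rng.random()
        points.append((r * math.cos(t), r * math.sin(t)))
    return points

rng = random.Random(0)
pts = sample_ppp(0.5, 5.0, rng)  # e.g. RIS locations in a disk of radius 5
```

Independent PPP draws for BSs, RISs, and UEs give Monte Carlo estimates of the association and coverage probabilities against which the closed-form expressions can be validated.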
In this paper, the problem of pilot contamination in a multi-cell massive multiple-input multiple-output (M-MIMO) system is addressed using deep reinforcement learning (DRL). To this end, a pilot assignment strategy is designed that adapts to channel variations while maintaining a tolerable pilot contamination effect. Using the angle of arrival (AoA) information of the users, a cost function portraying the reward is presented, which captures the pilot contamination effects in the system. Numerical results illustrate that the DRL-based scheme is able to track changes in the environment, learn the near-optimal pilot assignment, and achieve performance close to that of the optimum pilot assignment found by exhaustive search, while maintaining low computational complexity.
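The shape of such an AoA-based cost can be sketched as follows. This is a hypothetical form, not the paper's exact cost function: users in different cells that share a pilot contribute interference that decays with their AoA separation, so a good assignment pairs angularly distant users on the same pilot. The Gaussian decay, width parameter, and AoA values are assumptions.

```python
import math

def contamination_cost(assignment, aoas, width=0.3):
    """assignment[cell][user] = pilot index; aoas[cell][user] = AoA in radians.
    Sums a Gaussian penalty over cross-cell user pairs sharing a pilot."""
    cells = sorted(assignment)
    cost = 0.0
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            for u, p in enumerate(assignment[cells[i]]):
                for v, q in enumerate(assignment[cells[j]]):
                    if p == q:  # shared pilot across cells -> contamination
                        sep = abs(aoas[cells[i]][u] - aoas[cells[j]][v])
                        cost += math.exp(-((sep / width) ** 2))
    return cost

aoas = {"A": [0.00, 1.50], "B": [0.05, 1.45]}
aligned = {"A": [0, 1], "B": [0, 1]}  # shared pilots on near-identical AoAs
swapped = {"A": [0, 1], "B": [1, 0]}  # shared pilots on well-separated AoAs
print(contamination_cost(swapped, aoas) < contamination_cost(aligned, aoas))  # True
```

The negative of this cost would serve as the reward signal, and exhaustive minimization over all assignments gives the benchmark the DRL agent is compared against.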
With the growing demand for latency-critical and computation-intensive Internet of Things (IoT) services, mobile edge computing (MEC) has emerged as a promising technique to reinforce the computation capability of resource-constrained mobile devices. To exploit cloud-like functions at the network edge, service caching has been implemented to (partially) reuse computation tasks, thus effectively reducing the delay incurred by data retransmissions and/or the computation burden due to repeated execution of the same task. In a multiuser cache-assisted MEC system, designs for service caching depend on users' preference for different types of services, which is often highly correlated with the locations where the requests are made. In this paper, we exploit users' location-dependent service preference profiles to formulate a cache placement optimization problem in a multiuser MEC system. Specifically, we consider multiple representative locations, where users at the same location share the same preference profile for a given set of services. In a frequency-division multiple access (FDMA) setup, we jointly optimize the binary cache placement, edge computation resources, and bandwidth allocation to minimize the expected weighted-sum energy of the edge server and the users with respect to the users' preference profiles, subject to the bandwidth and computation limitations and the latency constraints. To effectively solve the mixed-integer non-convex problem, we propose a deep learning based offline cache placement scheme using a novel stochastic quantization based discrete-action generation method. In special cases, we also attain suboptimal caching decisions with low complexity by leveraging the structure of the optimal solution. The simulations verify the performance of the proposed scheme and the effectiveness of service caching in general.
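The discrete-action generation step can be sketched as follows. This is a hypothetical simplification of stochastic quantization, with made-up scores and capacity: a relaxed caching score in [0, 1] per service is sampled into several candidate binary placements, infeasible candidates are discarded, and the survivors would then be ranked by the system objective.

```python
import random

def stochastic_quantize(scores, capacity, num_candidates, rng):
    """Sample binary cache placements from relaxed scores; each service is
    cached with probability equal to its score. Placements exceeding the
    cache capacity are dropped."""
    candidates = []
    for _ in range(num_candidates):
        placement = tuple(1 if rng.random() < s else 0 for s in scores)
        if sum(placement) <= capacity:
            candidates.append(placement)
    return candidates

# Relaxed scores for four services, cache capacity of two services.
rng = random.Random(1)
cands = stochastic_quantize([0.9, 0.2, 0.7, 0.4], capacity=2,
                            num_candidates=8, rng=rng)
```

Compared with deterministic rounding, sampling several candidates explores the neighborhood of the relaxed solution, which helps escape poor roundings of the mixed-integer non-convex problem.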
Industrial Internet of Things (IIoT) revolutionizes future manufacturing by integrating Internet of Things technologies into industrial settings. With the deployment of massive numbers of IIoT devices, it is difficult for the wireless network to support ubiquitous connections with diverse quality-of-service (QoS) requirements. Although machine learning is regarded as a powerful data-driven tool for optimizing wireless networks, how to apply machine learning to the massive IIoT problems with their unique characteristics remains unsolved. In this paper, we first summarize the QoS requirements of typical massive non-critical and critical IIoT use cases. We then identify the unique characteristics of the massive IIoT scenario and the corresponding machine learning solutions, along with their limitations and potential research directions. We further present existing machine learning solutions for individual-layer and cross-layer problems in massive IIoT. Last but not least, we present a case study of the massive access problem based on deep neural network and deep reinforcement learning techniques, respectively, to validate the effectiveness of machine learning in the massive IIoT scenario.