Energy consumption remains a major limiting factor in many IoT applications; in particular, micro-controllers consume far too much power. To overcome this problem, new circuit designs have been proposed, and the use of spiking neurons and analog computing has emerged, as it allows a very significant reduction in consumption. However, working in the analog domain makes it difficult to handle the sequential processing of incoming signals, as needed in many use cases. In this paper, we use a bio-inspired phenomenon called Interacting Synapses to produce a time filter, without resorting to non-biological techniques such as synaptic delays. We propose a model of neuron and synapses that fires for a specific range of delays between two incoming spikes, but does not react when this Inter-Spike Timing is outside that range. We study the parameters of the model to understand how to choose them and how to adapt the Inter-Spike Timing. The originality of the paper is to propose a new way, in the analog domain, to deal with temporal sequences.
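As a rough illustration of the band-pass behaviour described above, the following software toy models the first spike's contribution as an alpha-shaped post-synaptic potential that rises and then decays, so the neuron crosses threshold only when the second spike lands near the potential's peak. This is a sketch under invented assumptions (the `tau`, `boost`, and `v_th` values and the alpha kernel are all hypothetical), not the analog circuit or the Interacting Synapses model of the paper:

```python
import numpy as np

def epsp(t, tau=0.010):
    """Alpha-shaped post-synaptic potential, normalized to peak at 1 when t = tau."""
    return (t / tau) * np.exp(1.0 - t / tau)

def isi_filter(delta_t, tau=0.010, boost=0.5, v_th=1.2):
    """Return True if a neuron receiving two spikes separated by delta_t
    seconds would fire: the second spike's fixed contribution (boost) plus
    the first spike's EPSP, evaluated at the arrival of the second spike,
    must cross the firing threshold v_th."""
    return epsp(delta_t, tau) + boost >= v_th
```

Because the kernel first rises and then decays, the threshold is crossed only for a band of Inter-Spike Timings around `tau`: with these toy values, a 1 ms or a 50 ms gap leaves the neuron silent, while a gap near 10 ms makes it fire.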
Research into 6G networks has been initiated to support a variety of critical artificial intelligence (AI) assisted applications such as autonomous driving. In such applications, AI-based decisions, including resource allocation, localization, and channel estimation, must be made in real time. Given the black-box nature of existing AI models, it is highly challenging to understand and trust their decision-making behavior. Explaining the logic behind those models through explainable AI (XAI) techniques is therefore essential for their adoption in critical applications. This manuscript proposes a novel XAI-based channel estimation (XAI-CHEST) scheme that provides detailed, plausible interpretations of the deep learning (DL) models employed in doubly-selective channel estimation. The proposed XAI-CHEST scheme identifies the relevant model inputs by inducing high noise on the irrelevant ones. As a result, the behavior of the studied DL-based channel estimators can be further analyzed and evaluated based on the generated interpretations. Simulation results show that the proposed XAI-CHEST scheme provides valid interpretations of the DL-based channel estimators across different scenarios.
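The idea of probing input relevance through injected noise can be sketched as follows: perturb one input feature at a time and measure how much the model output degrades; features whose perturbation barely moves the output are candidates for being irrelevant. This is a generic sensitivity-style sketch, not the actual XAI-CHEST procedure (the function name, the per-feature loop, and the toy `model` are all assumptions made for illustration):

```python
import numpy as np

def relevance_by_noise(model, x, sigma=1.0, n_trials=100, rng=None):
    """Score each input feature by the average output change caused by
    adding Gaussian noise of scale sigma to that feature alone.
    High score = output is sensitive to the feature = likely relevant."""
    rng = np.random.default_rng(0) if rng is None else rng
    base = model(x)
    scores = np.zeros(x.size)
    for i in range(x.size):
        total = 0.0
        for _ in range(n_trials):
            xp = x.copy()
            xp[i] += rng.normal(0.0, sigma)  # noise on feature i only
            total += np.abs(model(xp) - base).mean()
        scores[i] = total / n_trials
    return scores
```

On a toy model that depends only on its first input, e.g. `model = lambda x: np.array([2.0 * x[0]])`, the first feature receives a strictly positive score while the second scores zero.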
These days we live in a world with a permanent electromagnetic (EM) field, which raises many questions about our health and about the deployment of new equipment. The problem is that these fields remain difficult to visualize, and only a few experts can interpret them. To tackle this problem, we propose to estimate the field level at every position of the considered space from a few observations. This work presents an algorithm for spatial reconstruction of electromagnetic fields using Gaussian Process regression. We consider a spatial, physical phenomenon observed by a sensor network. A Gaussian Process regression model with a selected mean and covariance function is implemented to develop an estimation algorithm based on 9 sensors. A Bayesian inference approach is used to select the covariance function and to learn the hyperparameters from our data set. We present the prediction performance of the proposed model and compare it with the zero-mean case. The results show that the proposed Gaussian Process-based prediction model reconstructs the EM field at every position using only 9 sensors.
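The core of such a reconstruction can be sketched with the standard Gaussian Process posterior-mean formula: given sensor positions, readings, a mean function, and a covariance (here an RBF kernel as a stand-in; the paper's selected covariance, hyperparameter values, and 3x3 sensor layout below are assumptions), the field is predicted at any query position. This is a minimal sketch, not the authors' Bayesian model-selection pipeline:

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two sets of 2-D positions."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X_obs, y_obs, X_new, mean_fn=lambda X: np.zeros(len(X)),
               ell=1.0, sf=1.0, noise=1e-4):
    """GP posterior mean at X_new given sensor readings y_obs at X_obs."""
    K = rbf(X_obs, X_obs, ell, sf) + noise * np.eye(len(X_obs))
    K_star = rbf(X_new, X_obs, ell, sf)
    alpha = np.linalg.solve(K, y_obs - mean_fn(X_obs))
    return mean_fn(X_new) + K_star @ alpha
```

Swapping `mean_fn` between a zero function and a physically informed prior mean is what allows the comparison mentioned in the abstract; with 9 sensors on a grid, the posterior mean interpolates the readings and fills in the field between them.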
In this paper, we present a new receiver design that significantly improves performance in Internet of Things networks such as LoRa, i.e., networks based on chirp spread spectrum modulation. The proposed receiver is able to demodulate multiple users transmitting simultaneously over the same frequency channel with the same spreading factor. From a non-orthogonal multiple access point of view, it operates in the power domain and uses serial interference cancellation. Simulation results show that the receiver allows a significant increase in the number of connected devices in the network.
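The power-domain cancellation loop can be illustrated on a toy superposition: decode the strongest user first, re-modulate its decision, subtract it from the received signal, and repeat for the next user. The sketch below uses BPSK symbols in place of chirp spread spectrum and assumes a noiseless channel with perfectly known powers, so it shows only the serial interference cancellation principle, not the proposed LoRa receiver:

```python
import numpy as np

def sic_decode(y, powers):
    """Serial interference cancellation over superposed BPSK users.

    y       : received samples, sum over users u of sqrt(powers[u]) * s_u,
              with s_u in {-1, +1}.
    powers  : per-user transmit powers (assumed known at the receiver).
    Returns a dict mapping user index -> decoded bit array.
    """
    order = np.argsort(powers)[::-1]  # strongest user first
    bits = {}
    residual = y.astype(float).copy()
    for u in order:
        decision = np.sign(residual)            # hard decision on residual
        bits[u] = (decision > 0).astype(int)
        residual -= np.sqrt(powers[u]) * decision  # cancel re-modulated user
    return bits
```

With two users at powers 4 and 1, the first decision follows the stronger user (its amplitude dominates the sum), and after subtraction the residual is exactly the weaker user's signal, which is then decoded cleanly.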