Neuropathies are gaining relevance in clinical settings, as they can permanently compromise a person's quality of life. To support the recovery of patients, fully implanted devices are emerging as one of the most promising solutions. However, these devices, even when conceived as an integral part of a complex neural nanonetwork system, pose numerous challenges. In this article, we address one of them: the classification of motor/sensory stimuli. The task is performed by exploring four different types of artificial neural networks (ANNs) to extract various sensory stimuli from the electroneurographic (ENG) signal measured in the sciatic nerve of rats. Different sizes of the data sets are considered to analyze the feasibility of the investigated ANNs for real-time classification, through a comparison of their performance in terms of accuracy, F1-score, and prediction time. The design of the ANNs takes advantage of the modelling of the ENG signal as a multiple-input multiple-output (MIMO) system, which describes the measurements taken by state-of-the-art implanted nerve interfaces. These interfaces rely on multi-contact cuff electrodes to achieve nanoscale spatial discrimination of the nerve activity. The MIMO ENG signal model is a further contribution of this paper. Our results show that some ANNs are more suitable than others for real-time applications, achieving accuracies above $90\%$ on signal windows of $100$ and $200\,$ms with a processing time low enough to be effective for pathology recovery.
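The accuracy/prediction-time trade-off central to this abstract can be probed with even a minimal classifier. The sketch below is purely illustrative (not the authors' code): a one-layer softmax "ANN" is trained on synthetic multi-contact windows, where the 16 contacts, 100-sample windows, three stimulus classes, and toy class structure are all assumed values.

```python
# Illustrative sketch only: one-layer softmax classifier on synthetic
# multi-contact ENG-like windows, timing the per-window prediction.
import time
import numpy as np

rng = np.random.default_rng(0)
n_win, n_feat, n_cls = 600, 16 * 100, 3     # windows, features, stimuli

y = rng.integers(0, n_cls, n_win)
X = rng.normal(size=(n_win, n_feat)) + 0.3 * y[:, None]  # toy separability

W = np.zeros((n_feat, n_cls))
onehot = np.eye(n_cls)[y]
for _ in range(100):                         # plain gradient descent
    Z = X @ W
    p = np.exp(Z - Z.max(1, keepdims=True))  # stabilized softmax
    p /= p.sum(1, keepdims=True)
    W -= 0.01 / n_win * X.T @ (p - onehot)

t0 = time.perf_counter()
acc = np.mean(np.argmax(X @ W, axis=1) == y)
pred_ms = 1e3 * (time.perf_counter() - t0) / n_win
print(f"train accuracy={acc:.2f}, prediction time={pred_ms:.4f} ms/window")
```

On this deliberately easy synthetic task the training accuracy approaches one; the point of the sketch is that per-window prediction reduces to a single matrix product, which is what makes real-time operation plausible.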
This paper deals with radar imaging in non-line-of-sight (NLOS) conditions with the aid of non-reconfigurable electromagnetic skins (NR-EMSs). NR-EMSs are passive metasurfaces whose reflection properties are defined during the manufacturing process, representing a low-cost alternative to reconfigurable intelligent surfaces for implementing advanced wave manipulations. We propose and discuss a multi-view near-field radar imaging system where a moving source progressively illuminates different portions of the NR-EMS, each portion (\textit{module}) being purposely phase-configured to focus the impinging radiation onto a desired NLOS area of interest. The source, e.g., a radar-equipped vehicle, synthesizes a wide aperture that maps onto the NR-EMS, allowing NLOS imaging with enhanced resolution compared to the standalone radar capabilities. Simulation results show the feasibility and benefits of such an imaging approach and shed light on a possible practical application of metasurfaces for sensing.
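The per-module phase configuration can be sketched numerically. In this hedged example all geometry (a 28 GHz carrier, a 32-element linear module, source and focal positions) is assumed; it only illustrates the conjugate-phase rule by which each fixed element phase cancels the total source-element-focus path delay.

```python
# Minimal sketch (assumed geometry, not the paper's design): conjugate-phase
# configuration of one NR-EMS module focusing a source onto a NLOS point.
import numpy as np

c, f0 = 3e8, 28e9                    # assumed 28 GHz carrier
k = 2 * np.pi * f0 / c
d = c / f0 / 2                       # half-wavelength element spacing

xs = (np.arange(32) - 15.5) * d      # 32-element linear module on x-axis
elems = np.stack([xs, np.zeros_like(xs)], axis=1)
src, foc = np.array([-2.0, 3.0]), np.array([1.5, 2.0])

r_in = np.linalg.norm(elems - src, axis=1)
r_out = np.linalg.norm(elems - foc, axis=1)
phase = np.mod(-k * (r_in + r_out), 2 * np.pi)   # fixed at manufacturing

# All re-radiated contributions arrive in phase at the focal point:
field = abs(np.sum(np.exp(1j * (k * (r_in + r_out) + phase))))
print(field)  # ~32, i.e., a fully coherent sum over the 32 elements
```

Because the phases are frozen at manufacturing, each module focuses onto one fixed spot; the multi-view behavior in the paper comes from the moving source illuminating different modules in turn.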
In Integrated Sensing and Communication (ISAC) systems, matching radar targets with communication user equipments (UEs) enables several communication tasks, such as proactive handover and beam prediction. In this paper, we consider a radar-assisted communication system where a base station (BS) is equipped with a multiple-input multiple-output (MIMO) radar serving a double aim: (i) associate vehicular radar targets with vehicular equipments (VEs) in the communication beamspace and (ii) predict the beamforming vector for each VE from radar data. The proposed target-to-user (T2U) association consists of two stages. First, vehicular radar targets are detected from range-angle images and, for each, a beamforming vector is estimated. Then, the inferred per-target beamforming vectors are matched against those used at the BS for communication to perform the T2U association. Joint multi-target detection and beam inference is obtained by modifying the you only look once (YOLO) model, which is trained on simulated range-angle radar images. Simulation results over different urban vehicular mobility scenarios show that the proposed T2U method provides a probability of correct association that increases with the size of the BS antenna array, reflecting the improved separability of the VEs in the beamspace. Moreover, we show that the modified YOLO architecture can effectively perform both beam prediction and radar target detection, achieving similar mean average precision for detection across the considered antenna array sizes.
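The second (matching) stage can be sketched in isolation. In this hedged example all parameters are assumed: radar-inferred per-target beamforming vectors, given a shuffled order and a small angular error, are associated to the vectors in use at the BS by maximizing their beamspace correlation.

```python
# Hedged sketch of the matching stage only (all parameters assumed).
import numpy as np

def steering(n, theta):
    # Half-wavelength ULA steering vector toward angle `theta`.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

n_ant = 32
ve_angles = np.deg2rad([-30.0, 5.0, 40.0])
w_bs = np.stack([steering(n_ant, a) for a in ve_angles])    # BS beams

# Radar-side estimates: shuffled order and a 1-degree angular error.
order = [2, 0, 1]
w_radar = np.stack([steering(n_ant, ve_angles[i] + np.deg2rad(1.0))
                    for i in order])

corr = np.abs(w_radar @ w_bs.conj().T)    # beamspace correlation matrix
t2u = np.argmax(corr, axis=1)             # per-target best BS beam
print(t2u)  # recovers the shuffled order [2 0 1]
```

With many targets and beams, a one-to-one global assignment (e.g., the Hungarian algorithm over the correlation matrix) would replace the per-row argmax; the larger the array, the narrower the beams and the more separable the correlations, consistent with the reported trend.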
Integrated Sensing and Communication (ISAC) is one of the key pillars envisioned for 6G wireless systems. ISAC systems combine communication and sensing functionalities over a single waveform, with full resource sharing. In particular, waveform design for legacy Orthogonal Frequency Division Multiplexing (OFDM) systems consists of a suitable time-frequency resource allocation policy that balances communication and sensing performance. Leaving resources unused in time and/or frequency, however, yields an ambiguity function with high sidelobes that significantly degrade the sensing performance of OFDM ISAC waveforms. This paper proposes an OFDM-based ISAC waveform design that accounts for both communication and resource occupancy constraints. The proposed method minimizes the Cram\'er-Rao Bound (CRB) on delay and Doppler estimation for two closely spaced targets. Moreover, the paper addresses the under-sampling issue by interpolating the estimated sensing channel through matrix completion via Schatten $p$-norm approximation. Numerical results show that the proposed waveform outperforms state-of-the-art methods.
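The channel-interpolation step can be sketched, under assumed algorithmic details that do not reproduce the paper's exact method, as an iterative singular-value shrinkage whose weights follow the gradient of the Schatten $p$ quasi-norm ($p<1$ promotes low rank more aggressively than the nuclear norm, $p=1$), alternated with data consistency on the observed entries.

```python
# Illustrative sketch (assumed algorithm): completing an under-sampled
# low-rank channel matrix via Schatten-p weighted singular-value shrinkage.
import numpy as np

rng = np.random.default_rng(1)
n, r, p, tau = 40, 2, 0.5, 0.5

H = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # low-rank channel
mask = rng.random((n, n)) < 0.6                          # sampled entries

X = np.where(mask, H, 0.0)
err0 = np.linalg.norm(X - H) / np.linalg.norm(H)         # zero-fill error
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Shrinkage weight ~ d/ds sum(s**p): hits small singular values hardest.
    s = np.maximum(s - tau * p * np.maximum(s, 1e-9) ** (p - 1), 0.0)
    X = np.where(mask, H, U @ (s[:, None] * Vt))         # data consistency
err = np.linalg.norm(X - H) / np.linalg.norm(H)
print(f"zero-fill error {err0:.3f} -> completed error {err:.3f}")
```

The rank, sampling ratio, and step size here are toy values chosen so the iteration visibly improves on zero-filling; tuning them is where the actual waveform design interacts with the completion step.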
Coherent multistatic radio imaging represents a pivotal opportunity for forthcoming wireless networks, in which distributed nodes cooperate to achieve accurate sensing resolution and robustness. This paper delves into cooperative coherent imaging for vehicular radar networks. Herein, multiple radar-equipped vehicles cooperate to improve their collective sensing capabilities and to address the fundamental issue of distinguishing weak targets in close proximity to strong ones, a critical challenge for the protection of vulnerable road users. We prove the significant benefits of cooperative coherent imaging in the considered automotive scenario both in terms of probability of correct detection, evaluated for several system parameters, and in terms of resolution capabilities, showcased by a dedicated experimental campaign in which the collaboration between two vehicles enables the detection of the legs of a pedestrian close to a parked car. Moreover, as \textit{coherent} processing of data from several sensors requires very tight accuracy on clock synchronization and sensor positioning -- referred to as \textit{phase synchronization} -- (so as to predict sensor-target distances up to a fraction of the carrier wavelength), we present a general three-step cooperative multistatic phase synchronization procedure, detailing the required information exchange among vehicles in the specific automotive radar context and assessing its feasibility and performance via the hybrid Cram\'er-Rao bound.
This paper tackles the challenge of wideband MIMO channel estimation within indoor millimeter-wave scenarios. Our proposed approach exploits the integrated sensing and communication paradigm, where sensing information aids channel estimation. The key innovation consists of employing both spatial and temporal sensing modes to significantly reduce the number of required training pilots. Moreover, our algorithm addresses and corrects potential mismatches between sensing and communication modes, which can arise from differing sensing and communication propagation paths. Extensive simulations demonstrate that the proposed method requires 4x fewer pilots than the current state-of-the-art, marking a substantial advancement in channel estimation efficiency.
Electromagnetic skins (EMSs) have recently been considered as a booster for wireless sensing, but their usage on mobile targets is relatively novel and becomes of interest whenever the target reflectivity can, or must, be increased to improve detection or parameter estimation. In particular, when illuminated by a wide-bandwidth signal (e.g., from a radar operating at millimeter waves), vehicles behave like \textit{extended targets}, since multiple parts of the vehicle's body effectively contribute to the back-scattering. Moreover, in some cases perspective deformations challenge the correct localization of the vehicle. To address these issues, we propose mounting EMSs on vehicles' roofs to act as high-reflectivity planar retro-reflectors toward the sensing terminal. The advantage is twofold: \textit{(i)} by introducing a compact high-reflectivity structure on the target, we make vehicles behave like \textit{point targets}, avoiding perspective deformations and the related ranging biases, and \textit{(ii)} we increase the reflectivity of the vehicle, improving localization performance. We detail the EMS design from the system level to the full-wave level, considering both reconfigurable intelligent surfaces (RISs) and cost-effective static passive electromagnetic skins (SP-EMSs). Localization performance of the EMS-aided sensing system is also assessed by Cram\'er-Rao bound analysis in both narrowband and spatially wideband operating conditions.
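The retro-reflection idea can be sketched at the array-factor level. In this hedged example the 77 GHz carrier, the 64-element linear EMS, and the design incidence angle are all assumed; a static phase gradient is chosen so that a plane wave arriving from that angle is re-radiated back toward its source, which is what makes the vehicle respond like a strong point target.

```python
# Sketch with assumed parameters: static retro-reflective phase profile.
import numpy as np

f0, c = 77e9, 3e8
k = 2 * np.pi * f0 / c
d = c / f0 / 2                          # half-wavelength element spacing
xs = (np.arange(64) - 31.5) * d         # 64-element linear EMS
theta_i = np.deg2rad(25.0)              # assumed design incidence angle

# Retro-reflection phase: compensates the incident phase twice (in + out).
phi = np.mod(-2 * k * xs * np.sin(theta_i), 2 * np.pi)

def response(theta_out):
    # Re-radiated field toward `theta_out` for incidence from `theta_i`.
    path = k * xs * (np.sin(theta_i) + np.sin(theta_out))
    return abs(np.sum(np.exp(1j * (path + phi))))

angles = np.deg2rad(np.linspace(-60.0, 60.0, 241))
best = np.rad2deg(angles[int(np.argmax([response(a) for a in angles]))])
print(f"peak re-radiation at {best:.1f} deg for 25 deg incidence")
```

A single static gradient retro-reflects only around its design angle; covering a span of aspect angles is precisely where the RIS (reconfigurable) versus SP-EMS (static, possibly multi-panel) design trade-off discussed in the abstract arises.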
Space-time modulated metasurfaces (STMMs) are a recently proposed generalization of reconfigurable intelligent surfaces in which a proper time-varying phase is applied at the metasurface elements, enabling higher flexibility and control of the reflected signals. The spatial phase component can be designed to control the direction of reflection, while the temporal one can be adjusted to shift the frequency of the reflected signal or to convey information. However, the coupling between the spatial and temporal phases at the STMM can adversely affect its performance. Therefore, this paper analyzes the system parameters that affect the space-time coupling. Furthermore, two methods for space-time decoupling are investigated. Numerical results highlight the effectiveness of the proposed decoupling methods and reveal that the space-time phase coupling increases with the bandwidth of the temporal phase and the size of the STMM, and for grazing angles of incidence onto the STMM.
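The role of the temporal phase can be sketched in isolation, i.e., assuming an ideal, coupling-free STMM (the regime the decoupling methods aim to restore). In this minimal example, with an assumed sample rate and modulation frequency, a sawtooth temporal phase of slope $2\pi f_m$ shifts a unit reflected tone by exactly $f_m$.

```python
# Minimal sketch (assumed ideal, coupling-free STMM): a linear/sawtooth
# temporal phase shifts the reflected carrier by the modulation frequency.
import numpy as np

fs, f_m = 1e6, 1e4                  # assumed sample rate, modulation freq
t = np.arange(1000) / fs            # 1 ms observation window
phi_t = 2 * np.pi * f_m * t         # sawtooth (mod 2*pi) temporal phase

y = np.exp(1j * phi_t)              # reflection of a unit incident tone
spec = np.abs(np.fft.fft(y))
f_peak = np.fft.fftfreq(len(y), 1 / fs)[int(np.argmax(spec))]
print(f_peak)  # 10000.0: the reflected tone is shifted by exactly f_m
```

The space-time coupling studied in the paper is exactly what breaks this clean single-line spectrum: once the spatial phase interacts with the temporal one, energy leaks into spurious directions and frequencies.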
Networked sensing refers to the capability of properly orchestrating multiple sensing terminals to enhance specific figures of merit, e.g., positioning accuracy or imaging resolution. For radio-based sensing, it is essential to understand \textit{when} and \textit{how} sensing terminals should be orchestrated, namely the cooperation that best trades off performance against cost (e.g., energy consumption, communication overhead, and complexity). This paper addresses networked sensing from a physics-driven perspective, aiming to provide a general theoretical benchmark to evaluate its \textit{imaging} performance bounds and to guide the sensing orchestration accordingly. Diffraction tomography theory (DTT) is used to quantify the imaging resolution of any radio sensing experiment from inspection of its spectral (or wavenumber) content. In networked sensing, image formation is based on the back-projection integral, valid for any network topology and physical configuration of the terminals. \textit{Wavefield networked sensing} is the proposed framework in which multiple sensing terminals are orchestrated during the acquisition process to maximize the imaging quality (resolution and grating-lobe suppression) by pursuing the deceptively simple \textit{wavenumber tessellation principle}. We discuss all the cooperation possibilities between sensing terminals and possible killer applications. Remarkably, we show that the proposed method yields high-quality images of the environment under limited-bandwidth conditions by leveraging the coherent combination of multiple multi-static low-resolution images.
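The back-projection image formation mentioned above can be illustrated with a hedged, generic sketch (monostatic multi-frequency case; all parameters below are assumed): simulated echoes from a point target are phase-compensated and coherently summed over an image grid, and the image magnitude peaks at the true target position.

```python
# Hedged sketch of back-projection imaging (generic, assumed parameters).
import numpy as np

c = 3e8
freqs = np.linspace(28e9, 29e9, 64)           # assumed 1 GHz bandwidth
sensors = np.stack([np.linspace(-1, 1, 16), np.zeros(16)], axis=1)
target = np.array([0.3, 5.0])

r_t = np.linalg.norm(sensors - target, axis=1)    # sensor-target ranges
k = 2 * np.pi * freqs / c
data = np.exp(-2j * k[None, :] * r_t[:, None])    # echoes, shape (16, 64)

# Back-projection: phase-compensate and sum over candidate positions.
gx, gy = np.linspace(-1, 1, 41), np.linspace(4, 6, 41)
img = np.zeros((41, 41))
for i, x in enumerate(gx):
    for j, y in enumerate(gy):
        r = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        img[i, j] = abs(np.sum(data * np.exp(2j * k[None, :] * r[:, None])))

i0, j0 = np.unravel_index(int(np.argmax(img)), img.shape)
print(gx[i0], gy[j0])  # estimated target position, near (0.3, 5.0)
```

In the networked setting the same integral simply runs over all transmitter-receiver pairs of the cooperating terminals; the wavenumber tessellation principle then governs which pairs contribute non-redundant spectral coverage.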