This paper examines the separation of wireless communication and radar signals, thereby guaranteeing coexistence and easing the burden of spectrum sensing. First, assuming the channel impulse responses are known at the receivers (communication and radar), we show that optimizing the beamforming weights mitigates the interference caused by the signals and improves the physical layer security (PLS) of the system. Furthermore, when the channel responses are unknown, we design an interference filter as a low-complexity noise- and interference-cancellation autoencoder. By mitigating the interference at the legitimate users, PLS is guaranteed. Results show that even at low signal-to-noise ratio, the autoencoder yields low root-mean-square error (RMSE) values.
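The known-CSI case above relies on beamforming weights that pass the desired signal while nulling the interferer. A minimal numpy sketch of one standard such choice, an MVDR (minimum-variance distortionless-response) beamformer, illustrates the idea; the array size, steering directions, and interference power below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def steering(theta_deg, n=8):
    """ULA steering vector, half-wavelength element spacing."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 8
a_sig = steering(10.0, n)    # desired communication user (assumed direction)
a_int = steering(-40.0, n)   # interfering radar return (assumed direction)

# Interference-plus-noise covariance: interference 20 dB above unit noise.
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(n)

# MVDR weights: w = R^{-1} a / (a^H R^{-1} a), distortionless toward a_sig.
Ri_a = np.linalg.solve(R, a_sig)
w = Ri_a / (a_sig.conj() @ Ri_a)

gain_sig = abs(w.conj() @ a_sig)   # exactly 1 by the constraint
gain_int = abs(w.conj() @ a_int)   # driven close to 0
print(gain_sig, gain_int)
```

The distortionless constraint keeps the user's signal untouched while the inverse-covariance weighting places a deep null on the interferer, which is the interference-mitigation behavior the known-CSI result describes.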
In this paper, we design a first-of-its-kind transceiver (PHY layer) prototype for cloud-based audio-visual (AV) speech enhancement (SE) that complies with the high-data-rate and low-latency requirements of future multimodal hearing assistive technology. The innovative design must meet multiple challenging constraints, including uplink/downlink communications, transmission and signal-processing delay, and real-time AV SE model processing. The transceiver includes device detection, frame detection, frequency offset estimation, and channel estimation capabilities. We develop both uplink (hearing aid to the cloud) and downlink (cloud to hearing aid) frame structures based on the data rate and latency requirements. Due to the varying nature of the uplink information (audio and lip-reading), the uplink channel supports frame structures with multiple data rates, while the downlink channel has a fixed-data-rate frame structure. In addition, we evaluate the latency of the different PHY layer blocks of the transceiver for the developed frame structures using LabVIEW NXG. This can be used with software-defined radio (such as the Universal Software Radio Peripheral) for real-time demonstration scenarios.
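The frequency-offset estimation block listed above is commonly implemented by correlating two repeated halves of a preamble, so that the offset appears as a phase rotation between them (the Moose/Schmidl-Cox idea). The numpy sketch below shows only that generic principle; the sample rate, preamble length, and offset value are assumptions for illustration, not the prototype's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1e6          # sample rate (assumed)
N = 64            # half-preamble length (assumed)
cfo_hz = 3000.0   # true carrier frequency offset to recover

# Preamble: one random half repeated, then rotated by the CFO plus noise.
half = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
tx = np.tile(half, 2)
t = np.arange(2 * N) / fs
rx = tx * np.exp(2j * np.pi * cfo_hz * t)
rx += 0.01 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

# The CFO rotates the second half by 2*pi*cfo*N/fs relative to the first,
# so the angle of the cross-correlation reveals it.
corr = np.vdot(rx[:N], rx[N:])          # sum of conj(first) * second
cfo_est = np.angle(corr) * fs / (2 * np.pi * N)
print(cfo_est)  # close to 3000 Hz
```

The estimator is unambiguous as long as the rotation over N samples stays within plus/minus pi, which bounds the largest offset this half-length can resolve.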
Large bandwidth at mm-wave is crucial for 5G and beyond, but the high path loss (PL) requires highly accurate PL prediction for network planning and optimization. Statistical models with a slope-intercept fit fall short in capturing the large variations seen in urban canyons, whereas ray-tracing, capable of characterizing site-specific features, faces challenges in describing foliage and street clutter and the associated reflection/diffraction ray calculation. Machine learning (ML) is promising but faces three key challenges in PL prediction: 1) insufficient measurement data; 2) lack of extrapolation to new streets; 3) overwhelmingly complex features/models. We propose an ML-based urban canyon PL prediction model based on extensive 28 GHz measurements from Manhattan, where street clutter is modeled via a LiDAR point cloud dataset and buildings by a mesh-grid building dataset. We extract expert-knowledge-driven street clutter features from the point cloud and aggressively compress the 3D building information using a convolutional autoencoder. Using a new street-by-street training and testing procedure to improve generalizability, the proposed model using both clutter and building features achieves a prediction error (RMSE) of $4.8 \pm 1.1$ dB, compared to $10.6 \pm 4.4$ dB and $6.5 \pm 2.0$ dB for 3GPP LOS and slope-intercept prediction, respectively, where the standard deviation indicates street-by-street variation. Using only the four most influential clutter features, an RMSE of $5.5 \pm 1.1$ dB is achieved.
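The slope-intercept baseline the proposed model is compared against fits PL(d) = alpha + 10*beta*log10(d) by least squares over measured links. A minimal numpy sketch on synthetic data makes the baseline concrete; the distances, true slope/intercept, and shadowing spread below are illustrative assumptions, not the Manhattan measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic measurements: PL = alpha + 10*beta*log10(d) + lognormal shadowing.
alpha_true, beta_true = 61.4, 3.2        # illustrative values only
d = rng.uniform(30.0, 500.0, size=200)   # link distances in meters (assumed)
shadow = rng.normal(0.0, 4.0, size=200)  # 4 dB shadowing spread (assumed)
pl = alpha_true + 10.0 * beta_true * np.log10(d) + shadow

# Least-squares slope-intercept fit.
A = np.column_stack([np.ones_like(d), 10.0 * np.log10(d)])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)

rmse = np.sqrt(np.mean((A @ np.array([alpha_hat, beta_hat]) - pl) ** 2))
print(alpha_hat, beta_hat, rmse)
```

By construction this fit cannot do better than the shadowing spread around a single log-distance trend, which is why site-specific clutter and building features can reduce the RMSE further.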
Power consumption is one of the major issues in massive MIMO (multiple-input multiple-output) systems, causing increased long-term operational cost and overheating issues. In this paper, we consider per-antenna power allocation with a given finite set of power levels towards maximizing the long-term energy efficiency of multi-user systems, while satisfying the QoS (quality of service) constraints at the end users in terms of required SINRs (signal-to-interference-plus-noise ratios), which depend on the channel information. Assuming the channel states vary as a Markov process, the constrained problem is reformulated as an unconstrained one, followed by power allocation based on a Q-learning algorithm. Simulation results are presented to demonstrate the successful minimization of power consumption while achieving the SINR threshold at the users.
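The reformulation above can be sketched in a toy form: the QoS constraint is folded into the reward as a penalty, and tabular Q-learning then picks a discrete power level per Markov channel state. The channel states, transition matrix, power levels, and SINR threshold below are all assumed for illustration; the paper's multi-user, per-antenna setting is reduced here to a single scalar link.

```python
import numpy as np

rng = np.random.default_rng(3)

gains = np.array([0.5, 1.0, 2.0])      # quantized channel states (assumed)
P = np.array([0.0, 0.5, 1.0, 2.0])     # finite set of power levels (assumed)
noise, sinr_min = 0.1, 4.0             # noise power and SINR target (assumed)
T = np.array([[0.7, 0.2, 0.1],         # Markov channel transitions (assumed)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def reward(s, a):
    sinr = P[a] * gains[s] / noise
    # Constraint folded into the reward: heavy penalty if QoS is violated.
    return -P[a] - (10.0 if sinr < sinr_min else 0.0)

Q = np.zeros((len(gains), len(P)))
s = 0
for step in range(20000):
    # Epsilon-greedy action selection over the discrete power levels.
    a = rng.integers(len(P)) if rng.random() < 0.1 else int(Q[s].argmax())
    s2 = rng.choice(3, p=T[s])
    Q[s, a] += 0.1 * (reward(s, a) + 0.9 * Q[s2].max() - Q[s, a])
    s = s2

# Greedy policy: lowest power level that still meets the SINR target.
policy = [float(P[Q[s].argmax()]) for s in range(3)]
print(policy)
```

The learned policy spends just enough power in each channel state to keep the SINR above the threshold, which is the power-minimization-under-QoS behavior the abstract describes.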
In this work, we consider direction-of-arrival (DoA) estimation in the presence of extreme noise using Deep Learning (DL). In particular, we introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data generated from the true array manifold matrix and is able to predict angular directions using the sample covariance estimate. We model the problem as a multi-label classification task and train the CNN in the low-SNR regime to predict DoAs across all SNRs. The proposed architecture demonstrates enhanced robustness in the presence of noise and resilience to a small number of snapshots. Moreover, it is able to resolve angles within the grid resolution. Experimental results demonstrate significant performance gains in the low-SNR regime compared to state-of-the-art methods, without requiring any parameter tuning. We relax the assumption that the number of sources is known a priori and present a training method in which the CNN learns to infer the number of sources jointly with the DoAs. Simulation results demonstrate that the proposed CNN can accurately estimate off-grid angles at low SNR, while the number of sources is successfully inferred given a sufficient number of snapshots. Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
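The input/output pairing described above can be sketched as follows: the network input is the sample covariance of the array snapshots (split into real and imaginary channels), and the multi-label target is a multi-hot vector over the angle grid with one active entry per source. The array geometry, grid, SNR, and source directions below are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
M, snapshots = 8, 200                  # ULA size and snapshot count (assumed)
grid = np.arange(-60, 61, 1)           # 1-degree angle grid (assumed)
doas = np.array([-20, 35])             # true source directions (assumed)

def steer(theta_deg):
    """ULA array manifold columns, half-wavelength spacing."""
    return np.exp(1j * np.pi * np.outer(np.arange(M),
                                        np.sin(np.deg2rad(theta_deg))))

# Snapshots: two unit-power sources plus complex noise at 0 dB SNR.
A = steer(doas)
S = (rng.standard_normal((2, snapshots))
     + 1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
N = (rng.standard_normal((M, snapshots))
     + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2)
X = A @ S + N

# CNN input: sample covariance estimate as real/imag channels.
R = X @ X.conj().T / snapshots
inp = np.stack([R.real, R.imag])       # shape (2, M, M)

# Multi-label target: one active grid cell per source (multi-hot overall).
label = np.isin(grid, doas).astype(float)
print(inp.shape, int(label.sum()))     # (2, 8, 8) 2
```

Training a CNN on many such (covariance, multi-hot label) pairs drawn at low SNR is what gives the classifier its robustness; counting the active outputs is also how the number of sources can be inferred jointly with the DoAs.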