6G networks will be required to support higher data rates, improved energy efficiency, lower latency, and a more diverse set of users compared with 5G systems. To meet these requirements, extremely large antenna arrays and high-frequency signaling are envisioned to be key physical-layer technologies. The deployment of extremely large antenna arrays, especially in high-frequency bands, indicates that future 6G wireless networks are likely to operate in the radiating near-field (Fresnel) region, as opposed to the traditional far-field operation of current wireless technologies. In this article, we discuss the opportunities and challenges that arise in radiating near-field communications. We begin by discussing the key physical characteristics of near-field communications, where the standard plane-wave propagation assumption no longer holds, and clarify their implications for the modelling of wireless channels. Then, we elaborate on the ability to leverage spherical wavefronts via beam focusing, highlighting its advantages for 6G systems. We point out several appealing application scenarios which, with proper design, can benefit from near-field operation, including interference mitigation in multi-user communications, accurate localization and focused sensing, as well as wireless power transfer with minimal energy pollution. We conclude by discussing some of the design challenges and research directions that are yet to be explored to fully harness the potential of this emerging paradigm.
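To make the near-field/far-field distinction concrete, the following sketch (our illustration, not taken from the article) contrasts the spherical-wavefront steering vector of a uniform linear array with the conventional plane-wave approximation; the carrier frequency, aperture, and ranges are placeholder assumptions chosen so that the focal point lies well inside the Fresnel region.

```python
import numpy as np

# Illustrative sketch (not from the article): near-field vs. far-field
# steering for a uniform linear array.
c, fc = 3e8, 28e9                       # speed of light, carrier frequency [Hz]
lam = c / fc                            # wavelength
N, d = 256, lam / 2                     # elements, half-wavelength spacing
x_n = (np.arange(N) - (N - 1) / 2) * d  # element positions along the array

def nearfield_steering(r, theta):
    """Spherical-wavefront steering vector for a focal point at range r,
    angle theta from broadside: uses exact per-element distances."""
    dist = np.sqrt(r**2 + x_n**2 - 2 * r * x_n * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * dist / lam) / np.sqrt(N)

def farfield_steering(theta):
    """Plane-wave approximation: phase linear in the element positions."""
    return np.exp(-1j * 2 * np.pi * x_n * np.sin(theta) / lam) / np.sqrt(N)

# A near-field beamformer focuses energy around a *point*: its response,
# scanned over range at the focal angle, peaks near the intended 5 m,
# whereas a far-field beamformer can only steer an angular direction.
w = nearfield_steering(5.0, 0.0)
ranges = np.linspace(1.0, 20.0, 200)
gain = [abs(w.conj() @ nearfield_steering(r, 0.0))**2 for r in ranges]
print("peak response at ~%.1f m" % ranges[int(np.argmax(gain))])
```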
As radio-frequency (RF) antenna, component, and processing capabilities increase, the ability to perform multiple RF system functions from a common aperture is being realized. Conducting both radar and communications from the same system is potentially useful in vehicular, health-monitoring, and surveillance settings. This paper considers multiple-input multiple-output (MIMO) dual-function radar-communication (DFRC) systems in which the radar and communication modes use distinct baseband waveforms. A transmit precoder provides spatial multiplexing and power allocation among the radar and communication modes. Multiple precoder design approaches are introduced for a radar detection mode in which a total search volume is divided into dwells to be searched sequentially. The approaches are designed to enforce a reliance on radar waveforms for sensing purposes, yielding improved approximations of desired ambiguity functions compared with prior methods in the literature. The methods are also shown via simulation to enable design flexibility, allowing for prioritization of either subsystem and specification of a desired level of radar or communication performance.
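As a rough illustration of spatial multiplexing with power allocation between the two modes (our sketch, not one of the paper's precoder designs), the transmit block below superimposes separately precoded radar and communication waveforms; the steering-vector precoders, the power split rho, and all dimensions are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, L = 8, 64          # transmit antennas, snapshots per dwell (illustrative)
rho = 0.7              # fraction of transmit power allocated to radar

# Distinct baseband waveforms for the two modes, as in the considered setup.
s_radar = np.exp(2j * np.pi * rng.random((1, L)))   # unit-modulus radar waveform
s_comm = 2.0 * rng.integers(0, 2, (1, L)) - 1.0     # BPSK communication stream

# Placeholder spatial precoders: steer radar energy toward a search
# direction and the communication stream toward a user direction.
def steer(theta):
    return np.exp(1j * np.pi * np.arange(Nt) * np.sin(theta))[:, None] / np.sqrt(Nt)

P_r, P_c = steer(np.deg2rad(-20)), steer(np.deg2rad(35))

# Transmit signal: spatial multiplexing with power allocation between modes.
X = np.sqrt(rho) * P_r @ s_radar + np.sqrt(1 - rho) * P_c @ s_comm
print("transmit block shape:", X.shape)
```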
Reconfigurable Intelligent Surfaces (RISs) are envisioned to play a key role in future wireless communications, enabling programmable radio propagation environments. They are usually considered as nearly passive planar structures that operate as adjustable reflectors, giving rise to a multitude of implementation challenges, including an inherent difficulty in estimating the underlying wireless channels. In this paper, we propose the concept of Hybrid RISs (HRISs), which not only reflect the impinging waveform in a controllable fashion, but are also capable of sensing and processing a portion of it via a number of active reception elements. We first present implementation details for this novel metasurface architecture and propose a simple model for its operation when considered for wireless communications. As an indicative application of HRISs, we formulate and solve the individual-channel identification problem for the uplink of multi-user HRIS-empowered systems. Our numerical results showcase that, in the high signal-to-noise-ratio regime, HRISs enable individual channel estimation with notably fewer pilots than are needed when using a purely reflective RIS, which can only estimate the cascaded channel.
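A minimal sketch of why sensing elements enable individual (rather than cascaded) channel estimation follows; this is our simplified narrowband model, not the paper's, with the sensed power fraction eta, pilot design, and dimensions all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 32, 4, 8          # HRIS elements, users, pilot slots (T >= K)
eta = 0.5                   # assumed fraction of impinging power that is sensed

# Individual user-to-HRIS channels (what the HRIS tries to identify).
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Orthogonal uplink pilots (DFT columns), one sequence per user.
P = np.fft.fft(np.eye(T))[:, :K] / np.sqrt(T)     # T x K pilot matrix

# Pilots absorbed by the active reception elements, plus sensing noise.
noise = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = np.sqrt(eta) * H @ P.T + noise                # M x T sensed signal

# Least-squares estimate of the individual (non-cascaded) channels,
# exploiting pilot orthogonality (P^T conj(P) = I).
H_hat = Y @ np.conj(P) / np.sqrt(eta)
print("NMSE:", np.linalg.norm(H_hat - H)**2 / np.linalg.norm(H)**2)
```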
In this work, we consider the acquisition of stationary signals using uniform analog-to-digital converters (ADCs), i.e., employing uniform sampling and scalar uniform quantization. We jointly optimize the pre-sampling and reconstruction filters to minimize the time-averaged mean-squared error (TMSE) in recovering the continuous-time input signal for a fixed sampling rate and quantizer resolution, and obtain closed-form expressions for the minimal achievable TMSE. We show that, when the rate budget is small, the TMSE-minimizing pre-sampling filter suppresses aliasing and discards weak frequency components so that the remaining ones can be resolved with higher resolution. In our numerical study, we validate our results and show that sub-Nyquist sampling often minimizes the TMSE under tight rate budgets at the output of the ADC.
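The intuition for discarding weak bands under a tight budget can be shown with back-of-the-envelope rate-distortion arithmetic; the sketch below is a stylized illustration using the classical 2^(-2b) quantization-distortion scaling, not the paper's closed-form TMSE expressions, and all powers, bandwidths, and budgets are assumed values.

```python
import numpy as np

# Stylized sketch: a signal with a strong band of power 1 on [0, W] and a
# weak band of power 0.01 on [W, 2W]; the ADC has a fixed output budget of
# R bits per second, split as (sampling rate) x (bits per sample).
P_strong, P_weak = 1.0, 0.01
W = 1.0                      # strong-band bandwidth [Hz]
R = 8.0                      # total rate budget [bits/sec]

def tmse(fs, kept_power, lost_power):
    bits = R / fs                                        # bits per sample
    return kept_power * 2 ** (-2 * bits) + lost_power    # quantization + truncation

# (a) Nyquist sampling of the full 2W band, coarse quantization.
mse_nyquist = tmse(fs=4 * W, kept_power=P_strong + P_weak, lost_power=0.0)
# (b) Sub-Nyquist: the pre-sampling filter keeps only the strong band,
#     freeing bits to resolve it with higher resolution.
mse_sub = tmse(fs=2 * W, kept_power=P_strong, lost_power=P_weak)

print(f"Nyquist TMSE ~ {mse_nyquist:.4f}, sub-Nyquist TMSE ~ {mse_sub:.4f}")
```

Under this toy budget the sub-Nyquist option wins (about 0.014 versus 0.063), mirroring the abstract's claim that discarding weak components can be optimal when bits are scarce.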
Analog-to-digital converters (ADCs) allow physical signals to be processed using digital hardware. Their conversion consists of two stages: sampling, which maps a continuous-time signal into discrete time, and quantization, i.e., representing the continuous-amplitude quantities using a finite number of bits. ADCs typically implement generic uniform conversion mappings that are ignorant of the task for which the signal is acquired, and can be costly when operating at high rates and fine resolutions. In this work we design task-oriented ADCs which learn from data how to map an analog signal into a digital representation such that the system task can be efficiently carried out. We propose a model for sampling and quantization that facilitates the learning of non-uniform mappings from data. Based on this learnable ADC mapping, we present a mechanism for optimizing a hybrid acquisition system comprised of analog combining, tunable ADCs with fixed rates, and digital processing, by jointly learning its components end-to-end. Then, we show how one can exploit the representation of hybrid acquisition systems as a deep network to optimize the sampling rate and quantization rate for a given task by utilizing Bayesian meta-learning techniques. We evaluate the proposed deep task-based ADC in two case studies: the first considers symbol detection in multi-antenna digital receivers, where multiple analog signals are simultaneously acquired in order to recover a set of discrete information symbols. The second application is the beamforming of analog channel data acquired in ultrasound imaging. Our numerical results demonstrate that the proposed approach achieves performance comparable to that obtained with high sampling rates and fine-resolution quantization, while operating at a reduced overall bit rate.
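One common way to make a quantizer learnable is to replace its staircase with a smooth surrogate; the sketch below is an assumed form in the spirit of soft-to-hard quantization (the parameter names, level placement, and sharpness constant are ours, not necessarily the paper's model).

```python
import numpy as np

# Assumed sketch: a learnable scalar quantizer modeled as a sum of shifted
# tanh functions. The smooth surrogate is differentiable in the thresholds
# b_i and half-gaps a_i during training; with a large sharpness c it hardens
# into a (non-uniform) staircase quantization mapping at inference.
def soft_quantizer(x, a, b, c=50.0):
    """x: input samples; a: half-gaps between levels; b: thresholds."""
    return sum(ai * np.tanh(c * (x - bi)) for ai, bi in zip(a, b))

# Example: a 2-bit non-uniform quantizer (4 levels) with denser levels near
# zero, as might be learned for a task sensitive to small amplitudes.
a = [0.3, 0.4, 0.3]                    # half-gaps between adjacent levels
b = [-0.5, 0.0, 0.5]                   # non-uniform decision thresholds
x = np.array([-1.2, -0.3, -0.1, 0.1, 0.3, 1.2])
print(np.round(soft_quantizer(x, a, b), 3))   # ~[-1, -0.4, -0.4, 0.4, 0.4, 1]
```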
Plane-wave imaging enables many applications that require high frame rates, including localisation microscopy, shear-wave elastography, and ultra-sensitive Doppler. To alleviate the degradation of image quality with respect to conventional focused acquisition, multiple acquisitions from distinctly steered plane waves are typically compounded coherently (i.e., after time-of-flight correction) into a single image. This poses a trade-off between image quality and achievable frame rate. To address this trade-off, we propose a new deep learning approach, derived by formulating plane-wave compounding as a linear inverse problem, that attains high-resolution, high-contrast images from just 3 plane-wave transmissions. Our solution unfolds the iterations of a proximal gradient descent algorithm as a deep network, thereby directly embedding the physics-based generative acquisition model in the neural network design. We train our network in a greedy manner, i.e., layer by layer, using a combination of pixel, temporal, and distribution (adversarial) losses to achieve both perceptual fidelity and data consistency. Through its strong model-based inductive bias, the proposed architecture outperforms several standard benchmark architectures in terms of image quality, with a low computational and memory footprint.
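The unfolding principle can be sketched generically: each network layer mimics one proximal gradient iteration for a linear inverse problem y = Ax + n. The stand-in below uses a random operator and an l1 proximal step; in the actual network the step size, threshold, and operators of each layer are learned, and A models the plane-wave acquisition.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in acquisition model
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

def soft(v, tau):                      # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x, mu, lam = np.zeros(n), 0.2, 0.02    # step size and regularization weight
for _ in range(10):                    # 10 "layers" of the unfolded network
    x = soft(x - mu * A.T @ (A @ x - y), mu * lam)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```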
In this paper, we propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, which consists of deep neural network (DNN)-aided pilot training, channel feedback, and hybrid analog-digital (HAD) precoding. Specifically, we develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter. To reduce the signaling overhead and the channel state information (CSI) mismatch caused by the transmission delay, a two-timescale DNN composed of a long-term DNN and a short-term DNN is developed. The analog precoders are designed by the long-term DNN based on the CSI statistics and updated once per frame, where each frame consists of a number of time slots. In contrast, the digital precoders are optimized by the short-term DNN at each time slot based on the estimated low-dimensional equivalent CSI matrices. A two-timescale training method is also developed for the proposed DNN with a binary layer. We then analyze the generalization ability and signaling overhead of the proposed DNN-based algorithm. Simulation results show that our proposed technique significantly outperforms conventional schemes in terms of bit-error-rate performance, with reduced signaling overhead and shorter pilot sequences.
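The two-timescale structure can be sketched with classical stand-ins in place of the two DNNs (eigen-based analog precoding and zero-forcing digital precoding here are our placeholders, chosen only to show the update cadence; dimensions are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, K, slots = 64, 4, 4, 10    # antennas, RF chains, users, slots/frame

# Per-slot channel realizations within one frame (placeholder i.i.d. model).
Hs = [(rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt)))
      / np.sqrt(2) for _ in range(slots)]

# Long timescale: analog precoder from CSI statistics, fixed over the frame
# (constant-modulus projection of the top covariance eigenvectors).
R = sum(H.conj().T @ H for H in Hs) / slots
eigvecs = np.linalg.eigh(R)[1][:, ::-1]            # descending eigen-order
F_rf = np.exp(1j * np.angle(eigvecs[:, :Nrf])) / np.sqrt(Nt)

# Short timescale: digital precoder recomputed every slot from the
# low-dimensional equivalent channel (zero-forcing as a stand-in).
for H in Hs:
    H_eq = H @ F_rf                                # K x Nrf equivalent CSI
    F_bb = H_eq.conj().T @ np.linalg.inv(H_eq @ H_eq.conj().T)
    # the hybrid precoder applied in this slot is F_rf @ F_bb
print("analog precoder:", F_rf.shape, "| digital per slot:", F_bb.shape)
```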
We consider the use of deep learning for parameter estimation. We propose Bias Constrained Estimators (BCEs) that add a squared-bias term to the standard mean-squared error (MSE) loss. The main motivation for BCE is learning to estimate deterministic unknown parameters with no Bayesian prior. Unlike standard learning-based estimators that are optimal on average, we prove that BCEs converge to Minimum Variance Unbiased Estimators (MVUEs). We derive closed-form solutions for linear BCEs, which provide a flexible bridge between linear regression and the least squares method. In non-linear settings, we demonstrate that BCEs perform similarly to MVUEs even when the latter are computationally intractable. A second motivation for BCE arises in applications where multiple estimates of the same unknown are averaged for improved performance; examples include distributed sensor networks and test-time data augmentation. In such applications, unbiasedness is a necessary condition for asymptotic consistency.
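The loss itself is simple to state: empirical MSE plus a weighted squared-bias penalty, with the bias at each underlying parameter approximated by averaging several estimates produced for the same ground-truth value. The toy implementation below is our sketch; the grouping mechanism and the weight lam are assumptions for illustration.

```python
import numpy as np

def bce_loss(estimates, targets, lam=1.0):
    """BCE loss sketch. estimates: (G, M) array, M estimates per each of
    G true parameter values; targets: (G,) true parameter values."""
    err = estimates - targets[:, None]
    mse = np.mean(err ** 2)                         # standard MSE term
    bias_sq = np.mean(np.mean(err, axis=1) ** 2)    # squared empirical bias
    return mse + lam * bias_sq

# Toy example: a biased estimator is penalized beyond its variance.
rng = np.random.default_rng(0)
theta = np.array([0.0, 1.0, 2.0])
unbiased = theta[:, None] + 0.1 * rng.standard_normal((3, 50))
biased = unbiased + 0.3                             # constant bias of 0.3
print(bce_loss(unbiased, theta), bce_loss(biased, theta))
```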
Graph signals arise in various applications, ranging from sensor networks to social media data. The high-dimensional nature of these signals implies that they often need to be compressed in order to be stored and transmitted. The common framework for graph signal compression is based on sampling, resulting in a set of continuous-amplitude samples, which in turn have to be quantized into a finite-bit representation. In this work we study the joint design of graph signal sampling and quantization for graph signal compression. We focus on bandlimited graph signals, and show that the compression problem can be represented as a task-based quantization setup, in which the task is to recover the spectrum of the signal. Based on this equivalence, we propose a joint design of the sampling and recovery mechanisms for a fixed quantization mapping, and present an iterative algorithm for dividing the available bit budget among the discretized samples. Furthermore, we show how the proposed approach can be realized using graph filters that combine elements corresponding to neighbouring nodes of the graph, thus facilitating distributed implementation at reduced complexity. Our numerical evaluations on both synthetic and real-world data show that the joint sampling and quantization method yields a compact finite-bit representation of high-dimensional graph signals, which allows reconstruction of the original signal with accuracy within a small gap of that achievable with infinite-resolution quantizers.
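A minimal end-to-end sketch of the pipeline follows: sample a bandlimited graph signal at a node subset, uniformly quantize the samples, and recover the spectrum (the "task") by least squares. This is our assumed simplification; the graph, bit allocation, and sampling set are illustrative, not the paper's optimized designs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, S = 20, 4, 8          # nodes, bandwidth (spectral components), samples

# Random graph Laplacian and its eigenbasis (graph Fourier transform).
Adj = np.triu((rng.random((N, N)) < 0.3).astype(float), 1)
Adj += Adj.T
L = np.diag(Adj.sum(1)) - Adj
_, V = np.linalg.eigh(L)

x = V[:, :B] @ rng.standard_normal(B)       # bandlimited graph signal
nodes = rng.choice(N, S, replace=False)     # sampled node subset

# Scalar uniform quantization of the samples (4 bits over the dynamic range).
samples = x[nodes]
step = (samples.max() - samples.min()) / (2 ** 4 - 1)
q = np.round((samples - samples.min()) / step) * step + samples.min()

# Recover the B spectral coefficients (the task) by least squares.
s_hat = np.linalg.lstsq(V[nodes, :B], q, rcond=None)[0]
x_hat = V[:, :B] @ s_hat
print("NMSE:", np.linalg.norm(x_hat - x)**2 / np.linalg.norm(x)**2)
```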
In this paper, we consider deep neural networks for solving inverse problems that are robust to forward-model mis-specifications. Specifically, we treat sensing problems with model mismatch, where one wishes to recover a sparse high-dimensional vector from low-dimensional observations subject to uncertainty in the measurement operator. We then design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem. Our proposed network, named Robust lEarned Shrinkage-Thresholding (REST), includes an additional normalization step compared with the Learned Iterative Shrinkage-Thresholding Algorithm (LISTA), leading to reliable recovery of the signal under sample-wise varying model mismatch. The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems in which model mismatch is taken into consideration.
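To illustrate where a normalization step can enter a LISTA-style recovery, the schematic below normalizes the measurements to absorb an unknown per-sample gain in the operator before running shrinkage-thresholding layers. This is our sketch: the exact normalization used in REST may differ, and the weights here are the classical ISTA initialization rather than learned ones.

```python
import numpy as np

def soft(v, tau):                      # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
m, n, k = 32, 64, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[support] = 1.0
gain = 1.5                             # sample-wise operator mismatch
y = gain * A @ x_true

# Normalization step: removes the unknown scale before the layers,
# so support recovery is unaffected by the mismatch.
y_n = y / np.linalg.norm(y)

# LISTA-style layers with classical (untrained) ISTA weights.
mu = 0.2
W1, W2 = mu * A.T, np.eye(n) - mu * A.T @ A
x = np.zeros(n)
for _ in range(50):
    x = soft(W1 @ y_n + W2 @ x, 0.01)

print("true support:", np.sort(support))
print("est. support:", np.sort(np.argsort(-np.abs(x))[:k]))
```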