Channel state information (CSI) at the transmitter is crucial for massive MIMO downlink systems to achieve high spectrum and energy efficiency. Existing works have provided deep learning (DL) architectures for CSI feedback and recovery at the eNB/gNB by reducing user feedback overhead and improving recovery accuracy. However, existing DL architectures tend to be inflexible and non-scalable, as models are often trained for a preset number of antennas at a given compression ratio. In this work, we develop a flexible and scalable learning framework based on a divide-and-conquer approach (DCA). The new DCA architecture can flexibly accommodate different numbers of 3GPP antenna ports and dynamic levels of feedback compression. Importantly, it also significantly reduces computational complexity and memory size by allowing UEs to feed back segmented downlink CSI. We further propose a multi-rate successive convolution encoder with fewer than 1000 parameters. Test results demonstrate superior performance, good scalability, and low complexity for both indoor and outdoor channels.
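The divide-and-conquer idea above can be illustrated with a toy sketch: split the per-antenna-port CSI into non-overlapping segments and compress each segment independently with a shared low-rate encoder. The function names, the random-projection "encoder," and all dimensions here are illustrative stand-ins, not the paper's actual architecture.

```python
import random

def segment_csi(csi_rows, num_segments):
    """Split per-antenna-port CSI rows into equal, non-overlapping segments."""
    seg_len = len(csi_rows) // num_segments
    return [csi_rows[i * seg_len:(i + 1) * seg_len] for i in range(num_segments)]

def linear_encode(segment, ratio, seed=0):
    """Compress a flattened segment with a fixed random projection
    (a stand-in for the learned multi-rate convolution encoder)."""
    flat = [x for row in segment for x in row]
    m = max(1, int(len(flat) * ratio))
    rng = random.Random(seed)
    proj = [[rng.gauss(0.0, 1.0) for _ in flat] for _ in range(m)]
    return [sum(p * x for p, x in zip(prow, flat)) for prow in proj]

# toy CSI: 32 antenna ports x 8 delay taps, split into 4 subarray segments,
# each compressed to a quarter of its size before feedback
rng = random.Random(42)
csi = [[rng.random() for _ in range(8)] for _ in range(32)]
segments = segment_csi(csi, 4)
codes = [linear_encode(s, 0.25) for s in segments]
```

Because each segment is encoded by the same small network, the UE-side model size stays fixed as the antenna count grows, which is the scalability argument the abstract makes.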
Wireless links using massive MIMO transceivers are vital for next-generation wireless communication networks. Precoding in massive MIMO transmission requires accurate downlink channel state information (CSI). Many recent works have effectively applied deep learning (DL) to jointly train UE-side compression networks for delay-domain CSI and a BS-side decoding scheme. Crucially, these works assume that the full delay-domain CSI is available at the UE, but in reality, the UE must estimate the delay-domain CSI from a limited number of frequency-domain pilots. In this work, we propose a linear pilot-to-delay (P2D) estimator that transforms sparse frequency pilots into the truncated delay-domain CSI. We show that the P2D estimator is accurate under frequency downsampling, and we demonstrate that the P2D estimate can be effectively utilized with existing autoencoder-based CSI estimation networks. In addition to accounting for pilot-based estimates of downlink CSI, we apply unrolled optimization networks to emulate iterative solutions to compressed sensing (CS), and we demonstrate better estimation performance than prior autoencoder-based DL networks. Finally, we investigate the efficacy of trainable CS networks within a differential encoding network for time-varying CSI estimation, and we propose a new network, MarkovNet-ISTA-ENet, composed of a CS network for initial CSI estimation and multiple autoencoders to estimate the error terms. We demonstrate that this heterogeneous network has better asymptotic performance than networks composed of only one type of subnetwork.
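A minimal sketch of the pilot-to-delay idea, under the simplifying assumption of uniformly spaced pilot tones and a channel whose taps all fall within the truncation window: the linear map is then just a truncated inverse DFT over the pilot subcarriers. The function name and all dimensions are illustrative, not the paper's exact estimator.

```python
import cmath

def p2d_estimate(pilots, pilot_idx, num_subcarriers, num_taps):
    """Linear pilot-to-delay map: truncated IDFT over uniformly spaced pilot tones."""
    est = []
    for tap in range(num_taps):
        acc = 0j
        for h, k in zip(pilots, pilot_idx):
            acc += h * cmath.exp(2j * cmath.pi * k * tap / num_subcarriers)
        est.append(acc / len(pilots))
    return est

# toy 2-tap channel observed on every 4th of 32 subcarriers
N, D = 32, 4
taps = {0: 1.0 + 0j, 3: 0.5j}
H = [sum(g * cmath.exp(-2j * cmath.pi * k * t / N) for t, g in taps.items())
     for k in range(N)]
pilot_idx = list(range(0, N, D))
g_hat = p2d_estimate([H[k] for k in pilot_idx], pilot_idx, N, num_taps=N // D)
```

With pilot spacing D, taps confined to the first N/D delay bins are recovered exactly from the downsampled tones, which is why frequency downsampling is benign for delay-sparse channels.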
Accurate estimation of downlink CSI is required to achieve high spectrum and energy efficiency in massive MIMO systems. Previous works have developed learning-based CSI feedback frameworks within FDD systems for efficient CSI encoding and recovery, with demonstrated benefits. However, downlink pilots for CSI estimation at receiving terminals may occupy an excessively large number of resource elements when the number of antennas is massive, compromising spectrum efficiency. To overcome this problem, we propose a new learning-based feedback architecture for efficient encoding of partial CSI feedback from interleaved, non-overlapping antenna subarrays by exploiting CSI temporal correlation. For ease of encoding, we further design an IFFT approach to decouple the partial CSI of the antenna subarrays and to preserve partial CSI sparsity. Our results show superior performance in indoor/outdoor scenarios by the proposed model for CSI recovery at significantly reduced computational power and storage needs.
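The interleaved-subarray selection and IFFT step can be sketched as follows. This is a toy illustration only: the subarray indexing and the naive O(N^2) inverse DFT are stand-ins to show how each subarray's partial CSI is decoupled and moved into a sparse transform domain, not the paper's encoding pipeline.

```python
import cmath

def ifft(x):
    """Naive inverse DFT (O(N^2)); sufficient for a toy sketch."""
    N = len(x)
    return [sum(xk * cmath.exp(2j * cmath.pi * k * n / N) for k, xk in enumerate(x)) / N
            for n in range(N)]

def subarray_partial_csi(csi_per_antenna, num_subarrays, idx):
    """Interleaved, non-overlapping subarray: antennas idx, idx+S, idx+2S, ..."""
    return csi_per_antenna[idx::num_subarrays]

# toy CSI across 16 antennas, split into 4 interleaved subarrays;
# each subarray's partial CSI is transformed independently
csi = [complex(a % 3, 0) for a in range(16)]
part = subarray_partial_csi(csi, 4, 0)
sparse_rep = ifft(part)  # transform-domain representation of one subarray
```

Because the subarrays are non-overlapping, each UE report covers only a fraction of the array, and the transforms can be computed and fed back independently.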
Hyperspectral imaging is an important sensing technology with broad applications and impact in areas including environmental science, weather, and geo/space exploration. One important task of hyperspectral image (HSI) processing is the extraction of spectral-spatial features. Leveraging the recently developed framework of graph signal processing over multilayer networks (M-GSP), this work proposes several approaches to HSI segmentation based on M-GSP feature extraction. To capture joint spectral-spatial information, we first customize a tensor-based multilayer network (MLN) model for HSI and define an MLN singular space for feature extraction. We then develop an unsupervised HSI segmentation method utilizing MLN spectral clustering. Regrouping HSI pixels via MLN-based clustering, we further propose a semi-supervised HSI classification method based on multi-resolution fusion of superpixels. Our experimental results demonstrate the strength of M-GSP in HSI processing and spectral-spatial information extraction.
This work introduces a tensor-based framework of graph signal processing over multilayer networks (M-GSP) to analyze high-dimensional signal interactions. Following Part I's introduction of the fundamental definitions and spectrum properties of M-GSP, this second part discusses the implementation and applications of M-GSP in more detail. Specifically, we define the concepts of stationary processes, convolution, bandlimited signals, and sampling theory over multilayer networks. We also develop the fundamentals of filter design and derive approximate methods of spectrum estimation within the proposed framework. For practical applications, we further present several MLN-based methods for signal processing and data analysis. Our experimental results demonstrate significant performance improvement using our M-GSP framework over traditional signal processing solutions.
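The filter-design notion above has a simple finite-dimensional analogue: a polynomial filter h(S)x = sum_k h_k S^k x applied over a shift operator S, here a small flattened multilayer adjacency with hypothetical weights. This is a generic graph-filter sketch in the spirit of the framework, not the paper's tensor-domain construction.

```python
def matvec(S, x):
    """Dense matrix-vector product for a small shift operator."""
    return [sum(s * xi for s, xi in zip(row, x)) for row in S]

def poly_graph_filter(S, x, coeffs):
    """Apply a polynomial filter h(S) x = sum_k h_k S^k x over a shift S."""
    out = [0.0] * len(x)
    z = list(x)  # z holds S^k x, starting at k = 0
    for h in coeffs:
        out = [o + h * zi for o, zi in zip(out, z)]
        z = matvec(S, z)
    return out

# 2 layers x 2 nodes flattened into a 4-node shift (hypothetical weights):
# intra-layer edges connect the two nodes; inter-layer edges link copies
S = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
x = [1.0, 0.0, 0.0, 0.0]
y = poly_graph_filter(S, x, [0.5, 0.25])  # h(S) = 0.5 I + 0.25 S
```

The filter acts locally: each application of S mixes a signal value with its intra- and inter-layer neighbors, and the polynomial coefficients set the frequency response in the MLN spectral domain.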
Signal processing over single-layer graphs has become a mainstream tool owing to its power in revealing obscure underlying structures within data signals. More generally, however, many real-life datasets and systems are characterized by more complex interactions among distinct entities. Such complex interactions may span multiple levels that are difficult to model with a single-layer graph and can instead be captured by multiple layers of graph connections. Such a multilayer/multi-level data structure can be more naturally modeled by a high-dimensional multilayer network (MLN). This work generalizes traditional graph signal processing (GSP) to multilayer networks for the analysis of such multilayer signal features and their interactions. We propose a tensor-based framework of this multilayer network signal processing (M-GSP) in this two-part series. Specifically, Part I introduces the fundamentals of M-GSP and studies the spectrum properties of the MLN Fourier space. We further describe its connections to traditional digital signal processing and GSP. Part II focuses on several major tools within the M-GSP framework for signal processing and data analysis. We provide results to demonstrate the efficacy and benefits of applying multilayer networks and M-GSP in practical scenarios.
In this letter, we propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES). Specifically, the model updates for all the tasks are transmitted and superposed concurrently over a non-orthogonal uplink channel via over-the-air computation, and the aggregation results of all the tasks are reconstructed at the ES through an extended version of the turbo compressed sensing algorithm. Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
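The core of over-the-air computation can be sketched in a few lines: when devices transmit analog updates concurrently, the channel itself performs the sum, and the server only observes the superposed (and noisy) signal. This toy sketch omits the compressed-sensing reconstruction and multi-task superposition of the actual framework; all names and numbers are illustrative.

```python
import random

def ota_aggregate(updates, noise_std=0.0, seed=1):
    """Model the analog superposition of all devices' updates at the server,
    then average. The per-coordinate sum happens 'in the air'; the server
    never sees individual updates."""
    rng = random.Random(seed)
    dim = len(updates[0])
    rx = [sum(u[i] for u in updates) + rng.gauss(0.0, noise_std)
          for i in range(dim)]
    return [r / len(updates) for r in rx]

# three devices, four-dimensional model updates (illustrative numbers)
updates = [[1.0, 2.0, 3.0, 4.0],
           [2.0, 3.0, 4.0, 5.0],
           [3.0, 4.0, 5.0, 6.0]]
agg = ota_aggregate(updates)  # noiseless case: exact per-coordinate average
```

Because all devices share the same channel resources simultaneously, the uplink cost is independent of the number of devices, which is the bandwidth saving the letter quantifies.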
Massive multiple-input multiple-output (MIMO) enables ultra-high throughput and low latency for tile-based adaptive virtual reality (VR) 360 video transmission in wireless networks. In this paper, we consider a massive MIMO system where multiple users in a single-cell theater watch an identical VR 360 video. Based on tile prediction, the base station (BS) delivers the tiles in the predicted field of view (FoV) to users. By introducing practical supplementary transmission for missing tiles, which would otherwise cause unacceptable VR sickness, we propose the first stable transmission scheme for VR video. We formulate an integer non-linear programming (INLP) problem to maximize users' average quality of experience (QoE) score. Moreover, we derive the achievable spectral efficiency (SE) expression of predictive tile groups and the approximate achievable SE expression of missing tile groups, respectively. Analytically, the overall throughput is related to the number of tile groups and the length of pilot sequences. By exploiting the relationship between the structure of viewport tiles and the SE expression, we propose a multi-lattice multi-stream grouping method aimed at improving the overall throughput for VR video transmission. Moreover, we analyze the relationship between the QoE objective and the number of predictive tiles. We transform the original INLP problem into an integer linear programming problem by setting the predictive tile groups as constants. With variable relaxation and recovery, we obtain the optimal average QoE. Extensive simulation results validate that the proposed algorithm effectively improves QoE.