In this paper, a new semi-supervised deep MIMO detection approach using a cycle-consistent generative adversarial network (cycleGAN) is proposed, which performs detection without any prior knowledge of the underlying channel model. Specifically, we construct the cycleGAN detector as a bidirectional loop of least squares generative adversarial networks (LS-GANs). The forward direction of the loop learns to model the transmission process, while the backward direction learns to detect the transmitted signals. By optimizing the cycle consistency of the transmitted and received signals through this loop, the proposed method trains a dedicated detector block by block to fit the operating channel. Training is conducted online and comprises a supervised phase using the pilots and an unsupervised phase using the received payload data. This semi-supervised training strategy reduces the required size of the labelled training set, which is determined by the number of pilots, and thus effectively reduces the overhead. Numerical results show that the proposed semi-blind cycleGAN detector achieves better bit error rate (BER) performance than existing semi-blind deep learning detection methods as well as conventional linear detectors, especially when nonlinear distortion of the power amplifiers at the transmitter is considered.
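The cycle-consistency objective at the heart of this loop can be illustrated with plain linear maps standing in for the two LS-GAN generators. This is a minimal NumPy sketch under our own assumptions: the matrix `H`, the pseudo-inverse "detector", and all dimensions are illustrative stand-ins, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two generators: the forward generator G
# models the transmission process (x -> y), and the backward generator F
# acts as the detector (y -> x). Both are plain linear maps here, purely
# to illustrate the cycle-consistency objective.
H = rng.standard_normal((4, 4))          # "channel" learned by G
G = lambda x: H @ x                      # forward: transmit
F = lambda y: np.linalg.pinv(H) @ y      # backward: detect

def cycle_consistency_loss(x, y):
    """L2 cycle losses over both directions of the loop."""
    forward_cycle = np.mean((F(G(x)) - x) ** 2)   # x -> y -> x
    backward_cycle = np.mean((G(F(y)) - y) ** 2)  # y -> x -> y
    return forward_cycle + backward_cycle

x = rng.choice([-1.0, 1.0], size=4)       # BPSK-like transmitted symbols
y = G(x) + 0.01 * rng.standard_normal(4)  # noisy received signal
loss = cycle_consistency_loss(x, y)
print(loss)  # near zero, since F is (here) the exact inverse of G
```

In the actual method, both directions would be neural networks trained adversarially, with this cycle loss driving the detector to invert the learned channel model.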
Accurate segmentation of Anatomical brain Barriers to Cancer spread (ABCs) plays an important role in automatic delineation of the Clinical Target Volume (CTV) of brain tumors in radiotherapy. Although variants of U-Net are state-of-the-art segmentation models, they have limited performance when dealing with ABCs structures of various shapes and sizes, especially thin structures (e.g., the falx cerebri) that span only a few slices. To deal with this problem, we propose a High and Multi-Resolution Network (HMRNet) consisting of a multi-scale feature learning branch and a high-resolution branch, which maintains high-resolution contextual information and extracts more robust representations of anatomical structures at various scales. We further design a Bidirectional Feature Calibration (BFC) block that enables the two branches to generate spatial attention maps for mutual feature calibration. Considering the different sizes and positions of ABCs structures, our network is applied after a rough localization of each structure to obtain fine segmentation results. Experiments on the MICCAI 2020 ABCs challenge dataset showed that: 1) our proposed two-stage segmentation strategy largely outperformed methods segmenting all the structures in a single stage; 2) the proposed two-branch HMRNet maintains high-resolution representations and effectively improves performance on thin structures; 3) the proposed BFC block outperformed existing attention methods that use monodirectional feature calibration. Our method won second place in the ABCs 2020 challenge and has potential for more accurate and reliable delineation of the CTV of brain tumors.
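The mutual-gating idea behind the BFC block can be sketched as follows. This is our own simplification: the shapes, the channel-averaged "projection" vectors, and the single sigmoid gate are illustrative stand-ins for the paper's learned convolutional attention maps.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal sketch of Bidirectional Feature Calibration: each branch
# produces a spatial attention map (here a sigmoid over a learned channel
# projection) that recalibrates the OTHER branch's features, so that the
# high-resolution and multi-scale branches correct each other.
def bfc(feat_hr, feat_ms, w_hr, w_ms):
    att_from_ms = sigmoid(np.tensordot(w_ms, feat_ms, axes=([0], [0])))  # (H, W)
    att_from_hr = sigmoid(np.tensordot(w_hr, feat_hr, axes=([0], [0])))  # (H, W)
    calibrated_hr = feat_hr * att_from_ms   # multi-scale branch gates HR branch
    calibrated_ms = feat_ms * att_from_hr   # HR branch gates multi-scale branch
    return calibrated_hr, calibrated_ms

C, H, W = 8, 16, 16
feat_hr = rng.standard_normal((C, H, W))
feat_ms = rng.standard_normal((C, H, W))
w_hr, w_ms = rng.standard_normal(C), rng.standard_normal(C)

cal_hr, cal_ms = bfc(feat_hr, feat_ms, w_hr, w_ms)
print(cal_hr.shape, cal_ms.shape)
```

Because each attention map lies in (0, 1), the block can only attenuate features it deems unreliable, which is the usual behaviour of sigmoid spatial gating.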
To reap the gains promised by distributed reconfigurable intelligent surface (RIS)-enhanced communications in a wireless network, timing synchronization among these metasurfaces is an essential prerequisite in practice. This paper proposes a unified framework for the joint estimation of the unknown timing offsets and the RIS channel parameters, as well as the design of a cooperative reflection and synchronization algorithm for distributed multiple-RIS communication. Since an RIS is usually a passive device with limited signal processing capability, the individual timing offset and channel gains of each hop of the RIS links cannot be estimated directly. To make the estimation tractable, we propose to estimate the cascaded channels and timing offsets jointly by deriving a maximum likelihood estimator. Furthermore, we theoretically characterize the Cramér-Rao lower bound (CRLB) to evaluate the accuracy of this estimator. Using the proposed estimator and the derived CRLBs, an efficient resynchronization algorithm is devised jointly at the RISs and the destination to compensate for the multiple timing offsets. Based on the majorization-minimization framework, the proposed algorithm admits semi-closed-form and closed-form solutions for the RIS reflection matrices and the timing offset equalizer, respectively. Simulation results verify that our theoretical analysis matches the numerical tests well and validate the effectiveness of the proposed resynchronization algorithm.
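A stripped-down version of the joint estimation step conveys the intuition: under white Gaussian noise, the maximum likelihood estimate of an integer timing offset reduces to the lag maximizing the correlation with the known pilot, and the cascaded gain follows from a least-squares fit at that lag. All signal parameters below are illustrative assumptions, and this scalar toy ignores the multi-RIS and fractional-delay aspects of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a known pilot arrives with an unknown integer delay and an
# unknown cascaded gain, buried in AWGN.
pilot = rng.choice([-1.0, 1.0], size=32)
true_offset, true_gain = 5, 0.8

rx = np.zeros(64)
rx[true_offset:true_offset + 32] = true_gain * pilot
rx += 0.05 * rng.standard_normal(64)

def estimate_offset_and_gain(rx, pilot):
    # ML delay estimate = correlation peak; LS gain estimate at that lag.
    lags = range(len(rx) - len(pilot) + 1)
    corr = [np.dot(rx[l:l + len(pilot)], pilot) for l in lags]
    l_hat = int(np.argmax(np.abs(corr)))
    g_hat = corr[l_hat] / np.dot(pilot, pilot)
    return l_hat, g_hat

l_hat, g_hat = estimate_offset_and_gain(rx, pilot)
print(l_hat, g_hat)  # close to (5, 0.8)
```

The CRLB analysis in the paper quantifies how accurate such joint estimates can be; here a longer pilot simply tightens the gain estimate at rate 1/sqrt(pilot length).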
This paper investigates beamforming design in an integrated sensing and communication (ISAC) system, where the base station (BS) sends signals for simultaneous multiuser communication and radar sensing. We aim to maximize the energy efficiency (EE) of the multiuser communication while guaranteeing the sensing requirement in terms of individual radar beampattern gains. The problem is a complicated nonconvex fractional program that is challenging to solve. By appropriately reformulating the problem and then applying successive convex approximation (SCA) and semidefinite relaxation (SDR), we propose an iterative algorithm to address it. We further prove that the relaxation introduced by the SDR is tight. Numerical results validate the effectiveness of the proposed algorithm.
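The fractional structure of EE maximization is easiest to see on a scalar toy problem. The sketch below uses a Dinkelbach-style iteration, a classic technique for fractional programs, rather than the paper's SCA/SDR algorithm; the channel gain `g`, circuit power `Pc`, and power budget `Pmax` are illustrative assumptions.

```python
import numpy as np

# Toy fractional program: maximize EE(p) = log2(1 + g*p) / (p + Pc)
# over 0 <= p <= Pmax. This scalar problem stands in for the paper's
# matrix-valued beamforming design.
g, Pc, Pmax = 4.0, 1.0, 10.0

def rate(p):
    return np.log2(1.0 + g * p)

lam = 0.0
for _ in range(30):
    # Inner problem max_p rate(p) - lam*(p + Pc) has a closed-form
    # stationary point p = 1/(lam*ln2) - 1/g; clamp to the feasible set.
    p = np.clip(1.0 / (lam * np.log(2.0)) - 1.0 / g, 0.0, Pmax) if lam > 0 else Pmax
    lam = rate(p) / (p + Pc)   # Dinkelbach update: current EE value

ee = rate(p) / (p + Pc)
print(p, ee)
```

The iteration converges to the global optimum of this quasi-concave ratio; the paper's algorithm plays the analogous game in the matrix domain, with SCA handling the nonconvex numerator and SDR the rank constraints.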
To fully exploit the advantages of massive multiple-input multiple-output (mMIMO), it is critical for the transmitter to accurately acquire the channel state information (CSI). Deep learning (DL)-based methods have been proposed for CSI compression and feedback to the transmitter. Although most existing DL-based methods treat the CSI matrix as an image, the structural features of the CSI image are rarely exploited in neural network design. We therefore propose a self-information model that dynamically measures the amount of information contained in each patch of a CSI image from the perspective of structural features. By applying this self-information model, we then propose a model-and-data-driven network for CSI compression and feedback, namely IdasNet. IdasNet comprises a self-information deletion and selection (IDAS) module, an informative feature compression (IFC) encoder, and an informative feature recovery (IFR) decoder. In particular, the model-driven IDAS module pre-compresses the CSI image by removing redundancy in terms of the self-information. The IFC encoder then compresses the pre-compressed CSI image into a feature codeword with two components, i.e., the codeword values and their position indices. Subsequently, the IFR decoder decouples the codeword values and position indices to recover the CSI image. Experimental results verify that the proposed IdasNet noticeably outperforms existing DL-based networks under various compression ratios while reducing the number of network parameters by orders of magnitude compared with various existing methods.
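A hedged sketch of the self-information idea: split a "CSI image" into patches, estimate each patch's probability from a histogram of a simple patch statistic, and score patches by the negative log of that probability. Rare patches (e.g., strong multipath clusters) score high; flat background patches score low. The patch statistic and histogram model here are our own simplifications, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy 16x16 "CSI image": mostly flat, with one informative 4x4 cluster.
csi = np.zeros((16, 16))
csi[4:8, 4:8] = rng.standard_normal((4, 4)) + 3.0

def patch_self_information(img, patch=4, bins=8):
    h, w = img.shape
    # One scalar statistic (the mean) per non-overlapping patch.
    stats = np.array([
        img[i:i + patch, j:j + patch].mean()
        for i in range(0, h, patch) for j in range(0, w, patch)
    ])
    # Empirical probability of each patch statistic from a histogram.
    hist, edges = np.histogram(stats, bins=bins)
    p = hist / hist.sum()
    idx = np.clip(np.digitize(stats, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx] + 1e-12)   # self-information per patch

info = patch_self_information(csi)
print(info.reshape(4, 4))  # the cluster patch stands out with high score
```

A module like IDAS would keep the high-scoring patches and drop (or coarsely encode) the low-scoring ones before the learned encoder runs.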
Hybrid precoding is a cost-efficient technique for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) communications. This paper proposes a deep learning approach that uses a distributed neural network for hybrid analog-and-digital precoding design with limited feedback. The proposed distributed neural precoding network, called DNet, is designed to achieve two objectives. First, DNet realizes channel state information (CSI) compression with a distributed neural network architecture, which enables practical deployment across multiple users. Specifically, the network is composed of multiple independent sub-networks with the same structure and parameters, which reduces both the number of training parameters and the network complexity. Second, DNet learns to calculate the hybrid precoders from the CSI reconstructed from the limited feedback. Different from existing black-box neural network designs, DNet is specifically designed according to the data form of the matrix calculations in hybrid precoding. Simulation results show that the proposed DNet improves performance by up to nearly 50% over traditional limited-feedback precoding methods in tests with various CSI compression ratios.
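The parameter-sharing aspect of the distributed architecture can be sketched in a few lines: every user runs an identical compression sub-network, so the trainable parameter count does not grow with the number of users. In this sketch the "sub-network" is a single shared linear layer followed by a 1-bit quantizer; the layer sizes and quantizer are illustrative assumptions, not DNet's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

N_ANT, CODE = 32, 8
# One parameter set, reused by every user's sub-network.
W_shared = rng.standard_normal((CODE, N_ANT)) / np.sqrt(N_ANT)

def user_subnetwork(csi):
    code = W_shared @ csi                    # shared-weight compression
    return np.where(code >= 0, 1.0, -1.0)    # 1-bit limited feedback

# Four users, each compressing their own CSI with the SAME sub-network.
users_csi = [rng.standard_normal(N_ANT) for _ in range(4)]
feedback = [user_subnetwork(h) for h in users_csi]
print([f.shape for f in feedback])
```

Training such a shared sub-network once and deploying copies to all users is what makes the distributed design practical; the base-station side would then reconstruct CSI from the feedback bits and compute the hybrid precoders.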
In this paper, we consider a multiuser mobile edge computing (MEC) system, where a mixed-integer offloading strategy is used to assist the resource assignment for task offloading. Although the conventional branch-and-bound (BnB) approach can be applied to solve this problem, it incurs a huge computational burden, which limits its application. To address this issue, we propose an intelligent BnB (IBnB) approach that applies deep learning (DL) to learn the pruning strategy of the BnB procedure. With this learning scheme, the structure of the BnB approach ensures near-optimal performance, while the DL-based pruning strategy significantly reduces the complexity. Numerical results verify that the proposed IBnB approach achieves near-optimal performance with complexity reduced by over 80%.
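The place where a learned pruner slots into BnB is easy to show on a toy mixed-integer problem. The knapsack instance and the hand-written fractional bound below stand in for the paper's offloading problem; in the IBnB scheme, a trained classifier would replace the `classic_prune` predicate.

```python
# Toy branch-and-bound with a pluggable pruning predicate.
values  = [6, 10, 12, 7]
weights = [1, 2, 3, 2]
capacity = 5

def bound(i, value, weight):
    # Optimistic fractional-relaxation bound on the best completion.
    b, cap = value, capacity - weight
    for v, w in zip(values[i:], weights[i:]):
        take = min(1.0, cap / w) if cap > 0 else 0.0
        b += v * take
        cap -= w * take
    return b

def branch_and_bound(prune):
    best = 0
    stack = [(0, 0, 0)]  # (next item index, accumulated value, weight)
    while stack:
        i, val, wt = stack.pop()
        if wt > capacity or prune(i, val, wt, best):
            continue
        best = max(best, val)
        if i < len(values):
            stack.append((i + 1, val, wt))                           # skip item i
            stack.append((i + 1, val + values[i], wt + weights[i]))  # take item i
    return best

# Classic rule: prune when the optimistic bound cannot beat the incumbent.
# A DL classifier trained on solved instances would replace this predicate.
classic_prune = lambda i, val, wt, best: bound(i, val, wt) <= best
opt = branch_and_bound(classic_prune)
print(opt)  # 23: take the items with values 6, 10 and 7
```

Because the bound is a valid upper bound, classic pruning is exact; a learned pruner trades that guarantee for far fewer explored nodes, which is the complexity reduction the abstract reports.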
A major challenge in image segmentation is classifying object boundaries. Recent efforts propose to refine the segmentation result with boundary masks. However, models are still prone to misclassifying boundary pixels even when they correctly capture the object contours. In such cases, even a perfect boundary map is unhelpful for segmentation refinement. In this paper, we argue that assigning proper prior weights to error-prone pixels such as object boundaries can significantly improve the segmentation quality. Specifically, we present the \textit{pixel null model} (PNM), a prior model that weights each pixel according to its probability of being correctly classified by a random segmenter. Empirical analysis shows that PNM captures the misclassification distribution of different state-of-the-art (SOTA) segmenters. Extensive experiments on semantic, instance, and panoptic segmentation tasks over three datasets (Cityscapes, ADE20K, MS COCO) confirm that PNM consistently improves the segmentation quality of most SOTA methods (including vision transformers) and outperforms boundary-based methods by a large margin. We also observe that the widely used mean IoU (mIoU) metric is insensitive to boundaries of different sharpness. As a byproduct, we propose a new metric, \textit{PNM IoU}, which is sensitive to boundary sharpness and better reflects model segmentation performance in error-prone regions.
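One way to make the "random segmenter" intuition concrete: if a segmenter samples a pixel's label from the local label distribution, it is correct with probability equal to the sum of squared local label frequencies, so weighting pixels by one minus that probability up-weights boundaries where labels mix. This windowed estimate is our own simplification for illustration, not the paper's exact null model.

```python
import numpy as np

def pnm_weights(labels, win=1):
    """Weight each pixel by 1 - P(correct) of a local random segmenter."""
    h, w = labels.shape
    weights = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = labels[max(0, i - win):i + win + 1,
                           max(0, j - win):j + win + 1]
            _, counts = np.unique(patch, return_counts=True)
            freq = counts / counts.sum()
            p_correct = np.sum(freq ** 2)    # random segmenter's accuracy
            weights[i, j] = 1.0 - p_correct  # high at mixed-label regions
    return weights

labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                # vertical object boundary at column 4
w = pnm_weights(labels)
print(w[0, 0], w[0, 4])          # interior pixel ~0, boundary pixel > 0
```

Interior pixels, whose windows contain a single label, receive zero weight; pixels straddling the boundary receive strictly positive weight, matching the abstract's claim that error-prone pixels should dominate the prior.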
Precoding design exploiting deep learning methods has been widely studied for multiuser multiple-input multiple-output (MU-MIMO) systems. However, conventional neural precoding designs apply black-box neural networks, which are hardly interpretable. In this paper, we propose a deep learning-based precoding method with an interpretable neural precoding network, namely iPNet. In particular, iPNet mimics the classic minimum mean-squared error (MMSE) precoder and approximates the matrix inversion within its neural network architecture. Specifically, the proposed iPNet consists of a model-driven sub-network, responsible for augmenting the input channel state information (CSI), and a data-driven sub-network, responsible for calculating the precoder from this augmented CSI. The latter data-driven module is explicitly interpreted as an unsupervised learner of the MMSE precoder. Simulation results show that by exploiting the augmented CSI, the proposed iPNet achieves a noticeable performance gain over existing black-box designs and also exhibits enhanced generalizability against CSI mismatches.
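For reference, the classic MMSE (regularized zero-forcing) precoder that iPNet is said to mimic is W = H^H (H H^H + sigma^2 I)^{-1}, scaled to meet a power constraint. The sketch below computes it directly; the system dimensions, noise level, and normalization convention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

K, N = 4, 8              # users, transmit antennas (illustrative)
sigma2, P = 0.1, 1.0     # noise power, total transmit power

# Random Rayleigh channel, one row per user.
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def mmse_precoder(H, sigma2, P):
    """Classic MMSE precoder with Frobenius-norm power normalization."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(K))
    return np.sqrt(P) * W / np.linalg.norm(W, 'fro')

W = mmse_precoder(H, sigma2, P)
eff = H @ W   # effective channel: close to diagonal (low interference)
print(np.round(np.abs(eff), 2))
```

The matrix inversion in this closed form is exactly the operation iPNet approximates inside its architecture, which is what makes the data-driven module interpretable as an MMSE learner.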
To achieve accurate and robust pose estimation in Simultaneous Localization and Mapping (SLAM), multi-sensor fusion has proven to be an effective solution and thus holds great potential for robotic applications. This paper proposes FAST-LIVO, a fast LiDAR-inertial-visual odometry system that builds on two tightly coupled, direct odometry subsystems: a VIO subsystem and a LIO subsystem. The LIO subsystem registers the raw points of a new scan (instead of feature points on, e.g., edges or planes) to an incrementally built point cloud map. The map points are additionally attached with image patches, which are then used in the VIO subsystem to align a new image by minimizing the direct photometric errors without extracting any visual features (e.g., ORB or FAST corner features). To further improve the robustness and accuracy of the VIO, a novel outlier rejection method is proposed to reject unstable map points that lie on edges or are occluded in the image view. Experiments are conducted on both open data sequences and data from our customized device. The results show that our proposed system outperforms its counterparts and can handle challenging environments at reduced computational cost. The system supports both multi-line spinning LiDARs and emerging solid-state LiDARs with completely different scanning patterns, and runs in real time on both Intel and ARM processors. We open-source the code and dataset of this work on GitHub to benefit the robotics community.
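The direct photometric error minimized by the VIO subsystem can be sketched in its simplest form: compare the intensity patch attached to a map point against the patch at the point's projection in the new image, with no feature extraction at all. The images, projections, and patch size below are toy stand-ins for the real pipeline, which would also handle sub-pixel interpolation and exposure changes.

```python
import numpy as np

rng = np.random.default_rng(6)

def photometric_error(img_ref, img_new, uv_ref, uv_new, half=2):
    """Mean squared intensity difference between two (2*half+1)^2 patches."""
    (ur, vr), (un, vn) = uv_ref, uv_new
    patch_ref = img_ref[vr - half:vr + half + 1, ur - half:ur + half + 1]
    patch_new = img_new[vn - half:vn + half + 1, un - half:un + half + 1]
    return float(np.mean((patch_ref - patch_new) ** 2))

img_ref = rng.uniform(0.0, 255.0, (20, 20))
img_new = np.roll(img_ref, 3, axis=1)   # "camera motion": 3-pixel shift

# Error at the correctly predicted projection vs. a misaligned one.
err_aligned = photometric_error(img_ref, img_new, (8, 10), (11, 10))
err_misaligned = photometric_error(img_ref, img_new, (8, 10), (8, 10))
print(err_aligned, err_misaligned)
```

Pose optimization in a direct VIO amounts to adjusting the predicted projections until such errors are minimized over all tracked map points, which is why no ORB- or FAST-style features are needed.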