In this paper, we investigate an intelligent reflecting surface (IRS) assisted multi-user multiple-input multiple-output (MIMO) full-duplex (FD) system. We jointly optimize the active beamforming matrices at the access point (AP) and uplink users, and the passive beamforming matrix at the IRS to maximize the weighted sum-rate of the system. Since it is practically difficult to acquire the channel state information (CSI) for IRS-related links due to the passive operation and large number of elements of the IRS, we conceive a mixed-timescale beamforming scheme. Specifically, the high-dimensional passive beamforming matrix at the IRS is updated based on the channel statistics, while the active beamforming matrices are optimized based on the low-dimensional real-time effective CSI at each time slot. We propose an efficient stochastic successive convex approximation (SSCA)-based algorithm for jointly designing the active and passive beamforming matrices. Moreover, due to the high computational complexity caused by the matrix inversion computation in the SSCA-based optimization algorithm, we further develop a deep-unfolding neural network (NN) to address this issue. The proposed deep-unfolding NN maintains the structure of the SSCA-based algorithm but introduces a novel non-linear activation function and a number of learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion. In addition, we develop a black-box NN as a benchmark. Simulation results show that the proposed mixed-timescale algorithm outperforms the existing single-timescale algorithm, and that the proposed deep-unfolding NN approaches the performance of the SSCA-based algorithm with much lower computational complexity when deployed online.
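As a rough illustration of the matrix-inversion approximation mentioned above, a first-order Taylor expansion of the inverse around a nominal matrix can be sketched as below; the function name and the choice of expansion point `A0` are illustrative assumptions, and the exact learnable structure used in the deep-unfolding NN is not spelled out here.

```python
import numpy as np

def taylor_inverse(A, A0, A0_inv):
    """First-order Taylor expansion of A^{-1} around a nominal matrix A0:
    A^{-1} ~= A0^{-1} - A0^{-1} (A - A0) A0^{-1}.
    In the deep-unfolding NN, the fixed factors would be replaced by
    learnable parameter matrices so that no online inversion is required."""
    return A0_inv - A0_inv @ (A - A0) @ A0_inv

# Sanity check: the approximation error shrinks as A approaches A0.
rng = np.random.default_rng(0)
A0 = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
A = A0 + 0.01 * rng.standard_normal((4, 4))
err = np.linalg.norm(taylor_inverse(A, A0, np.linalg.inv(A0)) - np.linalg.inv(A))
print(err)
```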
In this paper, we propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, which consists of deep neural network (DNN)-aided pilot training, channel feedback, and hybrid analog-digital (HAD) precoding. Specifically, we develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter. To reduce the signaling overhead and channel state information (CSI) mismatch caused by the transmission delay, a two-timescale DNN composed of a long-term DNN and a short-term DNN is developed. The analog precoders are designed by the long-term DNN based on the CSI statistics and updated once in a frame consisting of a number of time slots. In contrast, the digital precoders are optimized by the short-term DNN at each time slot based on the estimated low-dimensional equivalent CSI matrices. A two-timescale training method is also developed for the proposed DNN with a binary layer. We then analyze the generalization ability and signaling overhead of the proposed DNN-based algorithm. Simulation results show that our proposed technique significantly outperforms conventional schemes in terms of bit-error rate performance with reduced signaling overhead and shorter pilot sequences.
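The binary layer referred to above is not specified further here; a common way to keep such a layer trainable is a sign function with a straight-through gradient estimator, sketched below as a hedged assumption rather than the exact architecture.

```python
import torch

class BinaryLayer(torch.autograd.Function):
    """Sign-based binarization with a straight-through gradient estimator.
    Forward: maps real-valued features to feedback bits in {-1, +1}.
    Backward: passes the incoming gradient through unchanged, so the layers
    preceding the binarization remain trainable despite the
    non-differentiable sign operation."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

# Example: turn an 8-dimensional feature vector into 8 feedback bits.
features = torch.randn(8, requires_grad=True)
bits = BinaryLayer.apply(features)
```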
Generative adversarial networks (GANs) are emerging machine learning models for generating synthesized data similar to real data by jointly training a generator and a discriminator. In many applications, data and computational resources are distributed over many devices, so centralized computation with all data in one location is infeasible due to privacy and/or communication constraints. This paper proposes a new framework for training GANs in a distributed fashion: Each device computes a local discriminator using local data; a single server aggregates their results and computes a global GAN. Specifically, in each iteration, the server sends the global GAN to the devices, which then update their local discriminators; the devices send their results to the server, which then computes their average as the global discriminator and updates the global generator accordingly. Two different update schedules are designed with different levels of parallelism between the devices and the server. Numerical results obtained using three popular datasets demonstrate that the proposed framework can outperform a state-of-the-art framework in terms of convergence speed.
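A minimal sketch of the server-side averaging step described above, assuming each device reports its discriminator parameters as a list of arrays; the function and variable names are illustrative, and the subsequent global generator update is omitted.

```python
import numpy as np

def average_discriminators(local_disc_weights):
    """Server-side aggregation: element-wise average of the locally updated
    discriminator parameters, layer by layer, to form the global
    discriminator for the next communication round."""
    return [np.mean(np.stack(layers), axis=0)
            for layers in zip(*local_disc_weights)]

# Hypothetical round with three devices, each returning two parameter arrays.
local_updates = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
global_discriminator = average_discriminators(local_updates)
```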
The graph neural network (GNN) is an efficient neural network model for graph data and is widely used in different fields, including wireless communications. Different from other neural network models, the GNN can be implemented in a decentralized manner with information exchanges among neighbors, making it a potentially powerful tool for decentralized control in wireless communication systems. The main bottleneck, however, is wireless channel impairments that deteriorate the prediction robustness of the GNN. To overcome this obstacle, we analyze and enhance the robustness of the decentralized GNN in different wireless communication systems in this paper. Specifically, using a GNN binary classifier as an example, we first develop a methodology to verify whether the predictions are robust. Then, we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded wireless communication systems. To remedy imperfect wireless transmission and enhance the prediction robustness, we further propose novel retransmission mechanisms for these two communication systems, respectively. Through simulations on synthetic graph data, we validate our analysis, verify the effectiveness of the proposed retransmission mechanisms, and provide some insights for practical implementation.
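For intuition only, one decentralized GNN layer evaluated at a single node might look like the following; the mean aggregation and ReLU are assumptions for illustration, and the retransmission mechanisms proposed in the paper would act on how `neighbor_msgs` is collected over the wireless channel.

```python
import numpy as np

def local_gnn_layer(own_feature, neighbor_msgs, W_self, W_neigh):
    """One decentralized GNN layer evaluated at a single node: the node
    averages the feature vectors received from its neighbors over the
    wireless channel, combines them with its own feature, and applies a
    ReLU. Retransmission logic would sit where neighbor_msgs is gathered."""
    aggregated = np.mean(neighbor_msgs, axis=0)
    return np.maximum(0.0, W_self @ own_feature + W_neigh @ aggregated)

# Example with a 3-neighbor node and 5-dimensional features.
rng = np.random.default_rng(1)
out = local_gnn_layer(rng.standard_normal(5),
                      rng.standard_normal((3, 5)),
                      rng.standard_normal((8, 5)),
                      rng.standard_normal((8, 5)))
```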
The millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention due to their simple hardware implementation and excellent performance. In this work, we investigate the joint design of the beam selection and digital precoding matrices for mmWave MU-MIMO systems with DLA to maximize the sum-rate subject to the transmit power constraint and the constraints on the selection matrix structure. The investigated non-convex problem with discrete variables and coupled constraints is challenging to solve, and we propose an efficient joint neural network (NN) design framework to tackle it. Specifically, the proposed framework consists of a deep reinforcement learning (DRL)-based NN and a deep-unfolding NN, which are employed to optimize the beam selection and digital precoding matrices, respectively. For the DRL-based NN, we formulate the beam selection problem as a Markov decision process and develop a double deep Q-network algorithm to solve it, where the base station acts as the agent and the state, action, and reward function are carefully designed. Regarding the design of the digital precoding matrix, we develop a deep-unfolding NN induced by the iterative weighted minimum mean-square error (WMMSE) algorithm, which unfolds this algorithm into a layer-wise structure with introduced trainable parameters. Simulation results verify that the jointly trained NN remarkably outperforms the existing iterative algorithms with reduced complexity and stronger robustness.
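The double deep Q-network update used for beam selection can be illustrated with the standard target computation below; the state and action encodings of the beams are problem-specific and not shown, and all names are illustrative.

```python
import numpy as np

def double_dqn_target(reward, next_state, q_online, q_target, gamma=0.99):
    """Double deep Q-network target: the online network selects the best
    next action, while the target network evaluates it, which reduces the
    over-estimation bias of plain DQN."""
    best_action = int(np.argmax(q_online(next_state)))
    return reward + gamma * q_target(next_state)[best_action]

# Toy example: Q-functions returning scores over three candidate beams.
q_online = lambda s: np.array([0.2, 0.9, 0.1])
q_target = lambda s: np.array([0.3, 0.7, 0.2])
target = double_dqn_target(reward=1.0, next_state=None,
                           q_online=q_online, q_target=q_target)
```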
The combinatorial auction (CA) is an efficient mechanism for resource allocation in different fields, including cloud computing. It can achieve high economic efficiency and user flexibility by allowing bidders to submit bids for combinations of different items instead of only for individual items. However, the problem of allocating items among the bidders to maximize the auctioneer's revenue, i.e., the winner determination problem (WDP), is NP-complete and inapproximable. Existing works on WDPs are generally based on mathematical optimization techniques, and most of them focus on the single-unit WDP, where each item has only one unit. In contrast, few works consider the multi-unit WDP, in which each item may have multiple units. Given that the multi-unit WDP is more complicated but prevalent in cloud computing, we propose leveraging machine learning (ML) techniques to develop a novel low-complexity algorithm for solving this problem with negligible revenue loss. Specifically, we model the multi-unit WDP as an augmented bipartite bid-item graph and use a graph neural network (GNN) with half-convolution operations to learn the probability of each bid belonging to the optimal allocation. To improve the sample generation efficiency and decrease the number of labeled instances needed, we propose two different sample generation processes. We also develop two novel graph-based post-processing algorithms to transform the outputs of the GNN into feasible solutions. Through simulations on both synthetic instances and a specific virtual machine (VM) allocation problem in a cloud computing platform, we validate that our proposed method can approach optimal performance with low complexity and has good generalization ability in terms of problem size and user-type distribution.
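A half-convolution on the bipartite bid-item graph can be read as message passing in one direction only; the sketch below, with an assumed sum aggregation, linear map, and ReLU, shows the item-to-bid direction, and the reverse bid-to-item pass would use the transposed incidence matrix.

```python
import numpy as np

def half_convolution(B, item_features, W):
    """One 'half-convolution' pass on the bipartite bid-item graph:
    messages flow from item nodes to bid nodes only. B is the bid-item
    incidence matrix (num_bids x num_items); each bid node sums the
    features of the items it requests, then applies a linear map and ReLU."""
    messages = B @ item_features
    return np.maximum(0.0, messages @ W)

# Toy instance: 3 bids over 4 items with 6-dimensional item features.
B = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
bid_embeddings = half_convolution(B, np.random.randn(4, 6), np.random.randn(6, 8))
```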
Optimization theory assisted algorithms have received great attention for precoding design in multiuser multiple-input multiple-output (MU-MIMO) systems. Although the resultant optimization algorithms are able to provide excellent performance, they generally incur considerable computational complexity, which hinders their practical application in real-time systems. In this work, to address this issue, we first propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed in matrix form to better solve the problems in communication systems. Then, we apply the proposed deep-unfolding framework to solve the sum-rate maximization problem for precoding design in MU-MIMO systems. An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed. Specifically, the iterative WMMSE algorithm is unfolded into a layer-wise structure, where a number of trainable parameters are introduced to replace the high-complexity operations in the forward propagation. To train the network, a generalized chain rule of the IAIDNN is proposed to depict the recurrence relation of gradients between two adjacent layers in the back propagation. Moreover, we discuss the computational complexity and generalization ability of the proposed scheme. Simulation results show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
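As a sketch of how a high-complexity operation can be replaced by trainable parameters, the receive-filter step of a WMMSE-style iteration is shown below with the matrix inverse approximated by a learnable affine map; this particular parameterization is an assumption for illustration and may differ from the exact IAIDNN structure.

```python
import numpy as np

def unfolded_receive_filter(H, V, X, Y, sigma2=1.0):
    """Receive-filter step of a WMMSE-style iteration with the matrix
    inversion replaced by a trainable affine surrogate.

    Classic step:   U = (sigma2*I + H V V^H H^H)^{-1} H V
    Unfolded step:  the inverse is approximated by X @ A + Y, where X and Y
    are per-layer trainable matrices learned offline."""
    A = sigma2 * np.eye(H.shape[0]) + H @ V @ V.conj().T @ H.conj().T
    A_inv_approx = X @ A + Y          # learnable surrogate for A^{-1}
    return A_inv_approx @ H @ V
```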
In cellular federated edge learning (FEEL), multiple edge devices holding local data jointly train a learning model by communicating learning updates to an access point without exchanging their data samples. With limited communication resources, it is beneficial to schedule the most informative local learning update. In this paper, a novel scheduling policy is proposed to exploit both diversity in multiuser channels and diversity in the importance of the edge devices' learning updates. First, a new probabilistic scheduling framework is developed to yield unbiased update aggregation in FEEL. The importance of a local learning update is measured by gradient divergence. If one edge device is scheduled in each communication round, the scheduling policy is derived in closed form to achieve the optimal trade-off between channel quality and update importance. The probabilistic scheduling framework is then extended to allow scheduling multiple edge devices in each communication round. Numerical results obtained using popular models and learning datasets demonstrate that the proposed scheduling policy can achieve faster model convergence and higher learning accuracy than conventional scheduling policies that only exploit a single type of diversity.
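A minimal sketch of probabilistic scheduling with unbiased aggregation, assuming a single scheduled device per round; the sampling probabilities shown (proportional to gradient norm) are only a placeholder, whereas the paper derives the optimal probabilities in closed form from both channel quality and gradient divergence.

```python
import numpy as np

def schedule_and_aggregate(updates, probs, rng=np.random.default_rng()):
    """Sample one device according to `probs` and scale its update by
    1/p_k, so that the aggregate is an unbiased estimate of the sum of
    all local updates."""
    k = rng.choice(len(updates), p=probs)
    return updates[k] / probs[k], k

# Illustrative probabilities (placeholder: proportional to gradient norm).
grad_norms = np.array([1.0, 3.0, 2.0])
probs = grad_norms / grad_norms.sum()
aggregate, scheduled = schedule_and_aggregate(
    [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)], probs)
```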
Generalized Benders decomposition (GBD) is a globally optimal algorithm for mixed integer nonlinear programming (MINLP) problems, which are NP-hard and widely found in the area of wireless resource allocation. The main idea of GBD is to decompose an MINLP problem into a primal problem and a master problem, which are solved iteratively until their solutions converge. However, a direct implementation of GBD is time- and memory-consuming, and the main bottleneck is the high complexity of the master problem, which increases over the iterations. Therefore, we propose to leverage machine learning (ML) techniques to accelerate GBD by decreasing the complexity of the master problem. Specifically, we utilize two different ML techniques, classification and regression, for this acceleration task. In this way, a cut classifier and a cut regressor are learned, respectively, to distinguish between useful and useless cuts. Only useful cuts are added to the master problem, and thus the complexity of the master problem is reduced. Using a resource allocation problem in device-to-device communication networks as an example, we validate that the proposed method can reduce the computational complexity of GBD without loss of optimality and has strong generalization ability. The proposed method is applicable to various MINLP problems in wireless networks, since the designs are invariant across different problems.
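The cut-selection step enabled by the learned classifier can be sketched as follows; the `Cut` container, its feature representation, and the `cut_classifier` callable are hypothetical placeholders introduced only for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Sequence

@dataclass
class Cut:
    features: Sequence[float]  # feature vector describing the cut (problem-specific)
    expr: Any                  # the cut itself, e.g. a constraint object

def filter_cuts(candidate_cuts: List[Cut],
                cut_classifier: Callable[[Sequence[float]], float],
                threshold: float = 0.5) -> List[Cut]:
    """Keep only the cuts that the learned classifier predicts to be useful,
    so the master problem stays small across GBD iterations."""
    return [c for c in candidate_cuts
            if cut_classifier(c.features) >= threshold]
```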
It has been a long-held belief that judicious resource allocation is critical to mitigating interference, improving network efficiency, and ultimately optimizing wireless communication performance. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve the problem to a certain level of optimality. Nonetheless, as wireless networks become increasingly diverse and complex, e.g., in high-mobility vehicular networks, the current design methodologies face significant challenges and thus call for a rethinking of the traditional design philosophy. Meanwhile, deep learning, with many success stories in various disciplines, represents a promising alternative due to its remarkable power to leverage data for problem solving. In this paper, we discuss the key motivations and roadblocks of using deep learning for wireless resource allocation with application to vehicular networks. We review major recent studies that apply the deep learning philosophy to wireless resource allocation and achieve impressive results. We first discuss deep learning assisted optimization for resource allocation. We then highlight the deep reinforcement learning approach to address resource allocation problems that are difficult to handle in the traditional optimization framework. We also identify some research directions that deserve further investigation.