To achieve communication-efficient federated multi-task learning (FMTL), we propose an over-the-air FMTL (OA-FMTL) framework, where multiple learning tasks deployed on edge devices share a non-orthogonal fading channel under the coordination of an edge server (ES). In OA-FMTL, the local updates of the edge devices are sparsified, compressed, and then sent over the uplink channel in a superimposed fashion. The ES employs over-the-air computation in the presence of inter-task interference. More specifically, the model aggregations of all the tasks are reconstructed concurrently from the channel observations, based on a modified version of the turbo compressed sensing (Turbo-CS) algorithm, referred to as M-Turbo-CS. We analyze the performance of the proposed OA-FMTL framework together with the M-Turbo-CS algorithm. Furthermore, based on this analysis, we formulate a communication-learning optimization problem that improves the system performance by adjusting the power allocation among the tasks at the edge devices. Numerical simulations show that the proposed OA-FMTL scheme effectively suppresses the inter-task interference and achieves a learning performance comparable to its counterpart with orthogonal multi-task transmission. They also show that the proposed inter-task power allocation optimization algorithm substantially reduces the overall communication overhead by appropriately adjusting the power allocation among the tasks.
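The core of over-the-air aggregation can be illustrated with a minimal numerical sketch: sparsified local updates are pre-equalized against known channel gains and transmitted simultaneously, so the server receives a noisy superposition that directly approximates the sum of the updates. All names and parameter values below are illustrative, not from the paper, and perfect channel state information is assumed (the paper's M-Turbo-CS reconstruction step is not modeled).

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, k_sparse = 64, 4, 8  # model dimension, devices, top-k sparsity (toy values)

def top_k_sparsify(g, k):
    """Keep the k largest-magnitude entries of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Local gradients and their sparsified versions
grads = [rng.standard_normal(d) for _ in range(K)]
sparse_grads = [top_k_sparsify(g, k_sparse) for g in grads]

# Fading channel: each device inverts its (assumed known) channel gain so the
# superimposed signal at the server is the plain sum of the sparsified updates.
h = rng.uniform(0.5, 1.5, size=K)
tx = [sg / h[k] for k, sg in enumerate(sparse_grads)]

noise = 0.01 * rng.standard_normal(d)
y = sum(h[k] * tx[k] for k in range(K)) + noise  # over-the-air superposition

target = sum(sparse_grads)
err = np.linalg.norm(y - target) / np.linalg.norm(target)  # small: y ~ sum of updates
```

In the actual framework the server would apply compressed-sensing recovery to `y` rather than use it directly; this sketch only shows why a single channel use suffices for the sum.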
In this letter, we extend the sparse Kronecker-product (SKP) coding scheme, originally designed for the additive white Gaussian noise (AWGN) channel, to multiple-input multiple-output (MIMO) unsourced random access (URA). With SKP coding adopted for MIMO transmission, we develop an efficient Bayesian iterative receiver to solve the resulting challenging trilinear factorization problem. Numerical results show that the proposed design outperforms the existing counterparts and performs well in all simulated settings with various antenna-array sizes and numbers of active users.
Recently, federated learning (FL), which replaces data sharing with model sharing, has emerged as an efficient and privacy-friendly paradigm for machine learning (ML). A main challenge of FL is its huge uplink communication cost. In this paper, we tackle this challenge from an information-theoretic perspective. Specifically, we put forth a distributed source coding (DSC) framework for the FL uplink, which unifies the encoding, transmission, and aggregation of the local updates as a lossy DSC problem, thus providing a systematic way to exploit the correlation between local updates to improve uplink efficiency. Under this DSC-FL framework, we propose an FL uplink scheme based on modified Berger-Tung coding (MBTC), which supports separate encoding and joint decoding by modifying the achievability scheme of the Berger-Tung inner bound. The achievable region of the MBTC-based uplink scheme is also derived. To unleash the potential of the MBTC-based FL scheme, we carry out a convergence analysis and then formulate a convergence-rate maximization problem to optimize the parameters of MBTC. To solve this problem, we develop two algorithms, for small- and large-scale FL systems respectively, based on the majorization-minimization (MM) technique. Numerical results demonstrate the superiority of the MBTC-based FL scheme in terms of aggregation distortion, convergence performance, and communication cost, revealing the great potential of the DSC-FL framework.
Unmanned aerial vehicle (UAV) and reconfigurable intelligent surface (RIS) technologies have recently been applied in the field of mobile edge computing (MEC) to improve the data exchange environment by proactively changing the wireless channels through maneuverable location deployment and intelligent signal reflection, respectively. Nevertheless, each may suffer from inherent limitations in practical scenarios. A UAV-mounted RIS (U-RIS) is a promising integrated approach that combines the advantages of UAVs and RISs to overcome these limitations. Inspired by this, we consider a novel U-RIS assisted MEC system, where a U-RIS is deployed to assist the communication between the ground users and an MEC server. We jointly design the UAV trajectory, the RIS passive beamforming, and the MEC resource allocation to maximize the energy efficiency (EE) of the system. To tackle the intractable non-convex problem, we divide it into two subproblems and solve them iteratively based on successive convex approximation (SCA) and the Dinkelbach method, obtaining a high-performance suboptimal solution. Simulation results show that the proposed algorithm significantly improves the EE of the MEC system.
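The Dinkelbach method solves a fractional (energy-efficiency) maximization by iterating on a parametric subtractive problem until the ratio estimate stabilizes. The following toy sketch, with made-up rate and power models and a grid-searched inner step, illustrates the iteration; it is not the paper's actual problem.

```python
import numpy as np

# Toy energy-efficiency problem: maximize R(p)/E(p) with
#   R(p) = log2(1 + g * p)   (achievable rate)
#   E(p) = p + p_c           (transmit plus circuit power)
# over p in [0, p_max]. All values are illustrative.
g, p_c, p_max = 4.0, 0.5, 2.0
grid = np.linspace(0.0, p_max, 10001)

def rate(p): return np.log2(1.0 + g * p)
def power(p): return p + p_c

lam = 0.0  # current estimate of the optimal EE ratio
for _ in range(50):
    # Inner parametric problem: maximize R(p) - lam * E(p) over the grid
    obj = rate(grid) - lam * power(grid)
    p_star = grid[np.argmax(obj)]
    lam_new = rate(p_star) / power(p_star)  # Dinkelbach update of the ratio
    if abs(lam_new - lam) < 1e-9:
        break
    lam = lam_new

# At convergence lam equals the maximum EE, and the optimal value of the
# parametric problem R(p) - lam * E(p) is (approximately) zero.
```

The same template applies when the inner subtractive problem is solved by a convex solver instead of a grid search, which is how the method is typically paired with SCA.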
In next-generation wireless networks, reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output (MIMO) systems are expected to support a large number of antennas at the transceiver as well as a large number of reflecting elements at the RIS. To fully unleash the potential of the RIS, the phase shifts of the RIS elements should be carefully designed, resulting in a high-dimensional non-convex optimization problem that is hard to solve with affordable computational complexity. In this paper, we address this scalability issue by partitioning the RIS into sub-surfaces, so that the phase shifts are optimized at the sub-surface level to reduce complexity. Specifically, each sub-surface employs a linear phase variation structure to anomalously reflect the incident signal to a desired direction, and the sizes of the sub-surfaces can be adaptively adjusted according to channel conditions. We formulate the achievable rate maximization problem by jointly optimizing the transmit covariance matrix and the RIS phase shifts. Then, we characterize the asymptotic behavior of the system with an infinitely large number of transceiver antennas and RIS elements. The asymptotic analysis provides useful insights into the fundamental performance-complexity tradeoff in RIS partitioning design. We show that the achievable rate maximization problem takes a rather simple form in the asymptotic regime, and we develop an efficient algorithm that finds the optimal solution via one-dimensional (1D) search. Moreover, we discuss the implications of the asymptotically optimal solution for finite-size system design. By applying the asymptotic result to a finite-size system with necessary modifications, we show by numerical results that the proposed design achieves a favorable tradeoff between system performance and computational complexity.
This paper proposes a generalized propulsion energy consumption model (PECM) for rotary-wing unmanned aerial vehicles (UAVs) that accounts for the practical thrust-to-weight ratio (TWR) with respect to the velocity, acceleration, and direction change of the UAVs. To verify the effectiveness of the proposed PECM, we consider a UAV-enabled communication system, where a rotary-wing UAV serves multiple ground users as an aerial base station. We aim to maximize the energy efficiency (EE) of the UAV by jointly optimizing the user scheduling and the UAV trajectory. However, the formulated problem is a non-convex fractional integer programming problem whose optimal solution is challenging to obtain. To tackle this, we propose an efficient iterative algorithm that decomposes the original problem into two sub-problems and obtains a suboptimal solution based on the successive convex approximation technique. Simulation results show that the UAV trajectory optimized under the proposed PECM is smoother and the corresponding EE is significantly improved compared with other benchmark schemes.
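Successive convex approximation replaces the non-convex objective at each iteration with a convex surrogate that is tight at the current point, then re-expands around the new minimizer. A minimal difference-of-convex sketch (toy objective, not from the paper):

```python
import numpy as np

# SCA on a toy difference-of-convex problem:
#   minimize f(x) = x^4 - 8 x^2
# Write f = g - h with g(x) = x^4 and h(x) = 8 x^2, both convex. At iterate
# x_t, replace h by its first-order expansion, giving the convex surrogate
#   x^4 - 16 x_t x + 8 x_t^2,
# whose minimizer has the closed form x = (4 x_t)^(1/3).
def f(x): return x**4 - 8 * x**2

x = 1.0  # initial point (illustrative)
for _ in range(100):
    x_new = np.cbrt(4.0 * x)  # minimizer of the convex surrogate at x
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

# The iterates converge to the stationary point x = 2, where f(2) = -16,
# which here is also a global minimizer of f.
```

In the paper's setting the surrogate subproblems are multivariate and solved by a convex solver, but the expand-solve-re-expand loop has the same shape.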
With the explosive growth of data and wireless devices, federated learning (FL) has emerged as a promising technology for large-scale intelligent systems. Utilizing the analog superposition of electromagnetic waves, over-the-air computation is an appealing approach to reduce the communication burden of FL model aggregation. However, with the urgent demand for intelligent systems, training multiple tasks with over-the-air computation further aggravates the scarcity of communication resources. This issue can be alleviated to some extent by training multiple tasks simultaneously with shared communication resources, but doing so inevitably introduces inter-task interference. In this paper, we study over-the-air multi-task FL (OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel. We propose a novel model aggregation method that aligns the local gradients of different devices, thereby alleviating the straggler problem that widely exists in over-the-air computation due to channel heterogeneity. We establish a unified communication-computation analysis framework for the proposed OA-MTFL scheme by considering the spatial correlation between devices, and formulate an optimization problem for transceiver beamforming design and device selection. We develop an algorithm based on alternating optimization (AO) and fractional programming (FP) to solve this problem, which effectively mitigates the impact of inter-task interference on the FL learning performance. We show that, thanks to the new model aggregation method, device selection is no longer essential to our scheme, thereby avoiding the heavy computational burden of implementing device selection. Numerical results demonstrate the correctness of the analysis and the outstanding performance of the proposed scheme.
In this paper, we study the user localization and tracking problem in a reconfigurable intelligent surface (RIS) aided multiple-input multiple-output (MIMO) system, where a multi-antenna base station (BS) and multiple RISs are deployed to assist the localization and tracking of a multi-antenna user. By establishing a probability transition model for user mobility, we develop a message-passing algorithm, termed the Bayesian user localization and tracking (BULT) algorithm, to estimate and track the user position and the angles of arrival (AoAs) at the user in an online fashion. We also derive the Bayesian Cram\'er-Rao bound (BCRB) to characterize the fundamental performance limit of the considered tracking problem. To improve the tracking performance, we optimize the beamforming design at the BS and the RISs to minimize the derived BCRB. Simulation results show that our BULT algorithm performs close to the derived BCRB and significantly outperforms counterpart algorithms that do not exploit the temporal correlation of the user location.
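As a rough illustration of how a Cram\'er-Rao-type bound characterizes a localization limit, the sketch below computes the classical CRB for 2-D positioning from noisy range measurements to known anchors. This is a simplified stand-in for the paper's Bayesian, RIS-aided setting; all geometry and noise values are invented.

```python
import numpy as np

# Toy model: y_i = ||p - a_i|| + n_i with n_i ~ N(0, sigma^2), anchors a_i known.
# The Fisher information matrix is FIM = (1/sigma^2) J^T J, where J is the
# Jacobian of the range functions w.r.t. the position p, and the CRB is FIM^-1.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
p = np.array([3.0, 4.0])   # true user position (toy values)
sigma = 0.1                # range-noise standard deviation

diff = p - anchors                      # shape (3, 2)
ranges = np.linalg.norm(diff, axis=1)
J = diff / ranges[:, None]              # rows: unit vectors anchor -> user
fim = (J.T @ J) / sigma**2              # Fisher information matrix
crb = np.linalg.inv(fim)                # covariance lower bound for any unbiased estimator

rmse_bound = np.sqrt(np.trace(crb))     # lower bound on position RMSE
```

The Bayesian variant in the paper additionally folds the mobility prior into the information matrix, and the beamforming optimization shapes the measurement geometry (here fixed by the anchors) to shrink the bound.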
This work proposes a new computational framework for learning an explicit generative model for real-world datasets. In particular, we propose to learn {\em a closed-loop transcription} between a multi-class multi-dimensional data distribution and a {\em linear discriminative representation (LDR)} in the feature space that consists of multiple independent multi-dimensional linear subspaces. We argue that the optimal encoding and decoding mappings sought can be formulated as the equilibrium point of a {\em two-player minimax game between the encoder and decoder}. A natural utility function for this game is the so-called {\em rate reduction}, a simple information-theoretic measure of distances between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback in control systems and avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of auto-encoding and GANs and naturally extends them to the setting of learning a representation that is {\em both discriminative and generative} for multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and often better than, existing methods based on GANs, VAEs, or a combination of both. We observe that the features so learned for different classes are explicitly mapped onto approximately {\em independent principal subspaces} in the feature space, and that diverse visual attributes within each class are modeled by the {\em independent principal components} within each subspace.
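The rate-reduction utility has a simple closed form in terms of log-determinants of feature covariances. The sketch below assumes the coding-rate formulation from the maximal-coding-rate-reduction line of work; function names and toy data are ours. It checks the qualitative behavior the abstract describes: classes on independent (orthogonal) subspaces yield a larger rate reduction than classes sharing a subspace.

```python
import numpy as np

# Coding rate of features Z in R^{d x n} (assumed MCR^2-style formulation):
#   R(Z) = 1/2 * logdet(I + d/(n * eps^2) * Z Z^T)
# Rate reduction = R(all features) - sum over classes of weighted per-class rates.
def coding_rate(Z, eps=0.5):
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z_list, eps=0.5):
    Z = np.hstack(Z_list)
    n = Z.shape[1]
    rc = sum((Zj.shape[1] / n) * coding_rate(Zj, eps) for Zj in Z_list)
    return coding_rate(Z, eps) - rc

rng = np.random.default_rng(0)
d, n_per = 8, 50
e1, e2 = np.eye(d)[:, :1], np.eye(d)[:, 1:2]  # two orthogonal directions

# Two classes on orthogonal 1-D subspaces vs. two classes on the same subspace
ortho = [e1 @ rng.standard_normal((1, n_per)), e2 @ rng.standard_normal((1, n_per))]
same = [e1 @ rng.standard_normal((1, n_per)), e1 @ rng.standard_normal((1, n_per))]

dr_ortho, dr_same = rate_reduction(ortho), rate_reduction(same)
# dr_ortho is substantially larger: separating classes into independent
# subspaces increases the total coding rate without increasing per-class rates.
```

This is only the measure, not the closed-loop minimax training itself, which alternates encoder and decoder updates against this utility.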
In reconfigurable intelligent surface (RIS) aided millimeter-wave (mmWave) communication systems, in order to overcome the limitations of conventional channel state information (CSI) acquisition techniques, this paper proposes a location-information-assisted beamforming design that does not require a conventional channel training process. First, we establish the geometrical relation between the channel model and the user location, based on which we derive an approximate CSI error bound in terms of the user location error by means of Taylor approximation, the triangle and power-mean inequalities, and semidefinite relaxation (SDR). Second, to combat the uncertainty of the location error, we formulate a worst-case robust beamforming optimization problem. To solve the problem efficiently, we develop a novel iterative algorithm by utilizing optimization tools such as the Lagrange multiplier method, the matrix inversion lemma, SDR, and branch-and-bound (BnB). In particular, the BnB algorithm is modified to acquire the phase shift solution under an arbitrary constraint on the possible phase shift values. Finally, we analyze the algorithm complexity and carry out simulations to validate the theoretical derivation of the CSI error bound and the robustness of the proposed algorithm. Compared with the existing non-robust approach and robust beamforming techniques based on the S-procedure and the penalty convex-concave procedure (CCP), our method converges faster and achieves better performance in terms of the worst-case signal-to-noise ratio (SNR) at the receiver.