In this paper, federated learning (FL) over wireless networks is investigated. In each communication round, a subset of devices is selected to participate in model aggregation under limited time and energy budgets. To minimize the convergence time, the global loss and the latency are jointly considered in a Stackelberg-game-based framework. Specifically, age-of-information (AoI) based device selection is treated at the leader level as a global loss minimization problem, while sub-channel assignment, computational resource allocation, and power allocation are treated at the follower level as a latency minimization problem. By dividing the follower-level problem into two sub-problems, the best response of the follower is obtained via a monotonic-optimization-based resource allocation algorithm and a matching-based sub-channel assignment algorithm. By deriving an upper bound on the convergence rate, the leader-level problem is reformulated, and a list-based device selection algorithm is then proposed to reach the Stackelberg equilibrium. Simulation results indicate that the proposed device selection scheme outperforms baseline schemes in terms of global loss, and that the developed algorithms significantly reduce the computation and communication time.
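The matching-based sub-channel assignment step can be illustrated with a minimal sketch: for a small number of devices, a one-to-one assignment minimizing total latency can even be found by exhaustive search. The latency matrix and problem size here are hypothetical stand-ins; the abstract's algorithm is a matching game, not brute force.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical latency (seconds) of device k transmitting on sub-channel m;
# in the paper this would come from the power/computation allocation step.
latency = rng.uniform(0.1, 1.0, size=(4, 4))

# Exhaustive one-to-one matching minimizing total latency (fine for toy sizes).
best = min(itertools.permutations(range(4)),
           key=lambda p: sum(latency[k, p[k]] for k in range(4)))
best_total = sum(latency[k, best[k]] for k in range(4))
```

A matching-game algorithm reaches a stable assignment with far less work than the factorial search above, which is why it scales to realistic numbers of devices and sub-channels.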
This paper presents a systematic investigation of codebook design for sparse code multiple access (SCMA) communication in downlink satellite Internet-of-Things (S-IoT) systems, which are generally characterized by Rician fading channels. To serve a massive number of low-end IoT sensors, we aim to develop enhanced SCMA codebooks that enable ultra-low decoding complexity while achieving good error performance. By analysing the pair-wise error probability in Rician fading channels, we deduce the design metrics for multi-dimensional constellation construction and sparse codebook optimization. To reduce the decoding complexity, we advocate the key idea of projecting the multi-dimensional constellation elements to a few overlapped complex numbers in each dimension, called low projection (LP). We adopt golden angle modulation (GAM), and hence call the resultant multi-dimensional constellation LPGAM. With the proposed design metrics and based on LPGAM, we propose an efficient multi-stage approach to sparse codebook optimization. Numerical and simulation results show the superiority of the proposed LP codebooks (LPCBs) in terms of decoding complexity and error-rate performance. In particular, some of the proposed LPCBs reduce the decoding complexity by 97\% compared to conventional codebooks while achieving the largest minimum Euclidean distance among existing codebooks. The proposed LPCBs are available at \url{https://github.com/ethanlq/SCMA-codebook}.
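As a rough illustration of the GAM ingredient, the sketch below generates a disc-shaped golden angle modulation constellation and computes its minimum Euclidean distance. The sqrt(n) radius law and unit-energy normalization are assumptions of this toy example, not the paper's optimized LPGAM construction.

```python
import numpy as np

N = 16
phi = (1 + np.sqrt(5)) / 2                      # golden ratio
n = np.arange(1, N + 1)
# Disc-shaped GAM: radius grows as sqrt(n), phase advances by the golden angle.
points = np.sqrt(n) * np.exp(1j * 2 * np.pi * phi * n)
points /= np.sqrt(np.mean(np.abs(points) ** 2))  # normalize to unit average energy

# Minimum Euclidean distance, a key design metric for error performance.
dist = np.abs(points[:, None] - points[None, :])
dmin = dist[dist > 0].min()
```

The golden-angle phase increment spreads points nearly uniformly over the disc, which is what makes GAM an attractive mother constellation for codebook design.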
In this paper, a novel nonlinear precoding (NLP) technique, namely constellation-oriented perturbation (COP), is proposed to tackle the scalability problem inherent in conventional NLP techniques. The basic concept of COP is to apply vector perturbation (VP) in the constellation domain rather than in the symbol domain, as is done in conventional techniques. By this means, the computational complexity of COP becomes independent of the size of the multi-antenna (i.e., MIMO) network; instead, it depends on the size of the symbol constellation. Through a widely linear transform, it is shown that the complexity of COP is flexibly scalable in the constellation domain, achieving a good complexity-performance tradeoff. Our computer simulations show that COP offers performance comparable to optimum VP in small MIMO systems. Moreover, it significantly outperforms current sub-optimum VP approaches (such as degree-2 VP) in large MIMO systems whilst maintaining much lower computational complexity.
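For context, conventional symbol-domain VP chooses an integer perturbation vector so that the precoded signal has minimum transmit power. A minimal brute-force sketch for a toy 2x2 MIMO system follows; the perturbation spacing tau and the search range are illustrative assumptions, and exhaustive search is exactly the scalability bottleneck that COP avoids.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
K = 2                                            # toy size so brute force is feasible
H = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)     # QPSK symbol vector
tau = 4.0                                        # perturbation spacing (assumed)

Hinv = np.linalg.inv(H)
cand = [-1, 0, 1]
best_l, best_p = None, np.inf
# Search over complex-integer perturbations l, minimizing ||H^-1 (s + tau*l)||^2.
for re in itertools.product(cand, repeat=K):
    for im in itertools.product(cand, repeat=K):
        l = np.array(re) + 1j * np.array(im)
        x = Hinv @ (s + tau * l)
        p = float(np.real(x.conj() @ x))
        if p < best_p:
            best_p, best_l = p, l
```

The candidate set grows exponentially with the number of users K, which motivates moving the perturbation search into the (fixed-size) constellation domain as COP does.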
For downlink ultra-reliable low-latency communication (URLLC) operating with frequency-division multiple access and random channel assignment, a lightweight power allocation approach is proposed to maximize the number of URLLC users subject to transmit-power and individual user-reliability constraints. Given perfect channel state information at the transmitter (CSIT), the proposed approach is proven to maximize the number of served URLLC users. Under imperfect CSIT, the approach still aims to maximize the number of URLLC users without compromising individual user reliability, by using a pessimistic evaluation of the channel gain. Numerical results demonstrate that the proposed approach can significantly improve the user capacity and the transmit-power efficiency in Rayleigh fading channels. With imperfect CSIT, the proposed approach still provides remarkable user capacity at a limited cost in transmit-power efficiency.
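A hedged sketch of the user-maximization idea: compute each user's required power under a pessimistically backed-off channel gain, then greedily admit users in order of increasing required power until the budget is spent. The Shannon-style rate model, back-off margin, and parameter values are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
K, P_max, R, N0 = 20, 1.0, 1.0, 0.01    # users, power budget (W), rate target, noise
g = rng.exponential(scale=1.0, size=K)  # Rayleigh fading -> exponential power gains
margin = 0.8                            # pessimistic back-off for imperfect CSIT
g_hat = margin * g

# Power each user needs to reach rate R under the pessimistic gain estimate.
p_req = (2.0 ** R - 1.0) * N0 / g_hat

# Greedy admission: cheapest users first, until the power budget is exhausted.
served, used = [], 0.0
for k in np.argsort(p_req):
    if used + p_req[k] <= P_max:
        served.append(int(k))
        used += p_req[k]
```

Sorting by required power and admitting greedily is the standard argument for why this kind of allocation maximizes the user count under a single power budget: any served set can be exchanged, user by user, for the cheapest users without increasing total power.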
Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that situates the understanding of deep networks within the bigger picture of Intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which address two fundamental questions regarding Intelligence: what to learn and how to learn, respectively. We believe these two principles are the cornerstones for the emergence of Intelligence, artificial or natural. While they have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, which unifies and explains the evolution of modern deep networks and many artificial intelligence practices. While we mainly use the modeling of visual data as an example, we believe the two principles will unify the understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to +36% on FMNIST and +37% on CIFAR10 when clients have dissimilar data.
We consider the problem of learning discriminative representations for data in a high-dimensional space with distribution supported on or around multiple low-dimensional linear subspaces. That is, we wish to compute a linear injective map of the data such that the features lie on multiple orthogonal subspaces. Instead of treating this learning problem using multiple PCAs, we cast it as a sequential game using the closed-loop transcription (CTRL) framework recently proposed for learning discriminative and generative representations for general low-dimensional submanifolds. We prove that the equilibrium solutions to the game indeed give correct representations. Our approach unifies classical methods of learning subspaces with modern deep learning practice, by showing that subspace learning problems may be provably solved using the modern toolkit of representation learning. In addition, our work provides the first theoretical justification for the CTRL framework, in the important case of linear subspaces. We support our theoretical findings with compelling empirical evidence. We also generalize the sequential game formulation to more general representation learning problems. Our code, including methods for easy reproduction of experimental results, is publicly available on GitHub.
Uncertainty quantification is essential for the reliable deployment of machine learning models in high-stakes application domains. It becomes all the more challenging when the training and test distributions differ, even if the distribution shifts are mild. Despite the ubiquity of distribution shifts in real-world applications, existing uncertainty quantification approaches mainly study the in-distribution setting where the training and test distributions are the same. In this paper, we develop a systematic calibration model that handles distribution shifts by leveraging data from multiple domains. Our proposed method -- multi-domain temperature scaling -- uses the heterogeneity across domains to improve calibration robustness under distribution shift. Through experiments on three benchmark data sets, we find that our proposed method outperforms existing methods on both in-distribution and out-of-distribution test sets.
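Standard temperature scaling, the building block of the proposed method, fits a single scalar temperature by minimizing the negative log-likelihood on held-out data. The sketch below fits a temperature to synthetically over-confident logits; the paper's method additionally exploits cross-domain structure to predict temperatures for unseen domains, which is omitted here.

```python
import numpy as np

def nll(logits, labels, T):
    # Average negative log-likelihood of temperature-scaled softmax probabilities.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    # One-dimensional grid search; a scalar NLL is cheap to minimize this way.
    grid = np.linspace(0.25, 5.0, 400)
    return grid[np.argmin([nll(logits, labels, t) for t in grid])]

# Synthetic check: labels drawn from softmax(base), logits inflated by 3x,
# so the fitted temperature should land near 3.
rng = np.random.default_rng(4)
base = rng.normal(size=(500, 5))
p = np.exp(base - base.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
labels = np.array([rng.choice(5, p=pi) for pi in p])
T = fit_temperature(3.0 * base, labels)
```

Fitting one temperature per training domain, then choosing or predicting a temperature for a shifted test domain, is the multi-domain extension the abstract describes.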
Deep Reinforcement Learning (DRL) has been a promising solution to many complex decision-making problems. Nevertheless, its notorious weakness in generalization across environments prevents the widespread application of DRL agents in real-world scenarios. Although advances have been made recently, most prior works assume sufficient online interaction with the training environments, which can be costly in practical cases. To this end, we focus on an \textit{offline-training-online-adaptation} setting, in which the agent first learns from offline experiences collected in environments with different dynamics and then performs online policy adaptation in environments with new dynamics. In this paper, we propose Policy Adaptation with Decoupled Representations (PAnDR) for fast policy adaptation. In the offline training phase, the environment representation and the policy representation are learned through contrastive learning and policy recovery, respectively. The representations are further refined by mutual-information optimization to make them more decoupled and complete. With the learned representations, a Policy-Dynamics Value Function (PDVF) (Raileanu et al., 2020) network is trained to approximate the values of different combinations of policies and environments. In the online adaptation phase, with the environment context inferred from a few experiences collected in the new environment, the policy is optimized by gradient ascent with respect to the PDVF. Our experiments show that PAnDR outperforms existing algorithms on several representative policy adaptation problems.
The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization. However, despite its demonstrated advantages, MCR$^2$ training incurs a significant computational cost due to the need to evaluate and differentiate a number of log-determinant terms that grows linearly with the number of classes. By taking advantage of variational forms of spectral functions of a matrix, we reformulate the MCR$^2$ objective into a form that scales significantly better without compromising training accuracy. Experiments on image classification demonstrate that our proposed formulation yields a significant speed-up over optimizing the original MCR$^2$ objective directly and often results in higher-quality learned representations. Further, our approach may be of independent interest in other models that require the computation of log-determinant forms, such as system identification or normalizing flow models.
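For reference, the original MCR$^2$ objective can be computed directly; the sketch below shows why its cost grows with the number of classes (one log-determinant per class). This is the baseline form, not the paper's variational reformulation, and the subspace-structured test data are an illustrative assumption.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T), for Z of shape (d, n).
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps ** 2) * Z @ Z.T)[1]

def mcr2(Z, labels, eps=0.5):
    # Expansion term minus one weighted log-det per class: the per-class loop
    # is exactly the linear-in-#classes cost the reformulation removes.
    n = Z.shape[1]
    Rc = sum((labels == j).sum() / n * coding_rate(Z[:, labels == j], eps)
             for j in np.unique(labels))
    return coding_rate(Z, eps) - Rc

# Toy data: three classes on three random 2-D subspaces of R^8.
rng = np.random.default_rng(5)
d, per = 8, 20
blocks, labels = [], []
for j in range(3):
    U = np.linalg.qr(rng.normal(size=(d, 2)))[0]
    blocks.append(U @ rng.normal(size=(2, per)))
    labels += [j] * per
Z = np.hstack(blocks)
labels = np.array(labels)
delta_r = mcr2(Z, labels)
```

A positive rate reduction here reflects that the whole ensemble spans more directions than each class alone, which is the structure MCR$^2$ training promotes.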