Abstract:User association, the problem of assigning each user device to a suitable base station, is increasingly crucial as wireless networks become denser and serve more users with diverse service demands. The joint optimization of user association and resource allocation (UARA) is a fundamental issue for future wireless networks, as it plays a pivotal role in enhancing overall network performance, user fairness, and resource efficiency. Given the latency-sensitive nature of emerging network applications, network management favors algorithms that are simple and computationally efficient rather than complex centralized approaches. Thus, distributed pricing-based strategies have gained prominence in the UARA literature, demonstrating practicality and effectiveness across various objective functions, e.g., sum-rate, proportional fairness, max-min fairness, and alpha-fairness. While the alpha-fairness framework allows for flexible adjustments between efficiency and fairness via a single parameter $\alpha$, existing works predominantly assume a homogeneous fairness context, assigning an identical $\alpha$ value to all users. Real-world networks, however, frequently require differentiated user prioritization due to varying application requirements and latency sensitivities. To bridge this gap, we propose a novel heterogeneous alpha-fairness (HAF) objective function, assigning distinct $\alpha$ values to different users, thereby providing enhanced control over the balance between throughput, fairness, and latency across the network. We present a distributed, pricing-based optimization approach utilizing an auxiliary variable framework and provide analytical proof of its convergence to an $\epsilon$-optimal solution, where the optimality gap $\epsilon$ decreases with the number of iterations.
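For context, a minimal sketch of how such a heterogeneous alpha-fair objective can be written, using the standard alpha-fair utility with a per-user exponent (the long-term rate $R_k$ of user $k$ is our own illustrative notation, not taken from the abstract):

\[
U_{\alpha}(x) =
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\
  \log x, & \alpha = 1,
\end{cases}
\qquad
\text{HAF objective:}\quad \max \ \sum_{k} U_{\alpha_k}(R_k),
\]

where choosing all $\alpha_k$ equal recovers the homogeneous case ($\alpha_k = 0$ gives sum-rate, $\alpha_k = 1$ gives proportional fairness, and $\alpha_k \to \infty$ approaches max-min fairness).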
Abstract:The integration of machine learning (ML) in cyber-physical systems (CPS) is a complex task due to the challenges that arise in terms of real-time decision making, safety, reliability, device heterogeneity, and data privacy. There are also open research questions that must be addressed in order to fully realize the potential of ML in CPS. Federated learning (FL), a distributed approach to ML that allows models to be trained on data from decentralized sources, has become increasingly popular in recent years. FL has also been gaining traction in the CPS field, which integrates computation, communication, and physical processes. Therefore, the purpose of this work is to provide a comprehensive analysis of the most recent developments in FL-CPS, including the numerous application areas, system topologies, and algorithms developed in recent years. The paper starts by discussing recent advances in both FL and CPS, followed by their integration. Then, the paper compares the application of FL in CPS with its applications in the internet of things (IoT) in further depth to show their connections and distinctions. Furthermore, the article scrutinizes how FL is utilized in critical CPS applications, e.g., intelligent transportation systems, cybersecurity services, smart cities, and smart healthcare solutions. The study also includes critical insights and lessons learned from various FL-CPS implementations. The paper's concluding section delves into significant concerns and suggests avenues for further research in this fast-moving area.
Abstract:Machine learning (ML)-based feedback channel coding has garnered significant research interest in the past few years. However, there has been limited research exploring ML approaches in the so-called "two-way" setting where two users jointly encode messages and feedback for each other over a shared channel. In this work, we present a general architecture for ML-based two-way feedback coding, and show how several popular one-way schemes can be converted to the two-way setting through our framework. We compare such schemes against their one-way counterparts, revealing error-rate benefits of ML-based two-way coding in certain signal-to-noise ratio (SNR) regimes. We then analyze the tradeoffs between error performance and computational overhead for three state-of-the-art neural network coding models instantiated in the two-way paradigm.
Abstract:Augmenting federated learning (FL) with device-to-device (D2D) communications can help improve convergence speed and reduce model bias through local information exchange. However, data privacy concerns, trust constraints between devices, and unreliable wireless channels each pose challenges in finding an effective yet resource-efficient D2D graph structure. In this paper, we develop a decentralized reinforcement learning (RL) method for D2D graph discovery that promotes communication of impactful datapoints over reliable links for multiple learning paradigms, while respecting both data- and device-specific trust constraints. An independent RL agent at each device trains a policy to predict the impact of incoming links in a decentralized manner without exposure of local data or significant communication overhead. For supervised settings, the D2D graph aims to improve device-specific label diversity without compromising system-level performance. For semi-supervised settings, we enable this label-diversity objective by incorporating distributed label propagation. For unsupervised settings, we develop a variation-based diversity metric that estimates data diversity in terms of occupied latent space. Numerical experiments on five widely used datasets confirm that the data diversity improvements induced by our method increase convergence speed by up to 3 times while reducing energy consumption by up to 5 times. They also show that our method is resilient to stragglers and changes in the aggregation interval. Finally, we show that our method offers scalability benefits for larger system sizes without increases in relative overhead, and adaptability to various downstream FL architectures and to dynamic wireless environments.
Abstract:In recent years, machine learning (ML) methods have become increasingly popular in wireless communication systems for several applications. A critical bottleneck for designing ML systems for wireless communications is the availability of realistic wireless channel datasets, which are extremely resource-intensive to produce. To this end, the generation of realistic wireless channels plays a key role in the subsequent design of effective ML algorithms for wireless communication systems. Generative models have been proposed to synthesize channel matrices, but outputs produced by such methods may not correspond to geometrically viable channels and do not provide any insight into the scenario of interest. In this work, we aim to address both of these issues by integrating a parametric, physics-based geometric channel (PBGC) modeling framework with generative methods. To address limitations with gradient flow through the PBGC model, a linearized reformulation is presented, which ensures smooth gradient flow during generative model training, while also capturing insights about the underlying physical environment. We evaluate our model against prior baselines by comparing the generated samples in terms of the 2-Wasserstein distance and through the utility of generated data when used for downstream compression tasks.
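As an illustration of the evaluation metric mentioned above, the following minimal Python sketch computes a 2-Wasserstein distance under a Gaussian approximation of the real and generated channel feature distributions; the paper's exact evaluation pipeline is not specified in the abstract, and the array shapes and synthetic data below are placeholders.

import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(real, gen):
    # 2-Wasserstein distance between Gaussian fits of two sample sets.
    # real, gen: (num_samples, feature_dim) arrays, e.g. vectorized channel matrices.
    mu_r, mu_g = real.mean(axis=0), gen.mean(axis=0)
    cov_r = np.cov(real, rowvar=False)
    cov_g = np.cov(gen, rowvar=False)
    root_g = sqrtm(cov_g).real          # sqrtm may carry tiny imaginary parts; drop them
    cross = sqrtm(root_g @ cov_r @ root_g).real
    w2_sq = np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * cross)
    return float(np.sqrt(max(w2_sq, 0.0)))

# Toy usage with random stand-ins for real and generated channel features.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 16))
gen_feats = rng.normal(loc=0.1, size=(500, 16))
print(gaussian_w2(real_feats, gen_feats))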
Abstract:Precise channel state knowledge is crucial in future wireless communication systems, which drives the need for accurate channel prediction without additional pilot overhead. While machine-learning (ML) methods for channel prediction show potential, existing approaches have limitations in their capability to adapt to environmental changes due to their extensive training requirements. In this paper, we introduce channel prediction approaches from two perspectives: temporal channel prediction and environmental adaptation. Then, we elaborate on how advanced ML-based channel prediction can resolve the issues of traditional ML methods. Numerical results show that advanced ML-based channel prediction achieves accuracy comparable to conventional prediction methods with much less training overhead. Also, we examine the training process, dataset characteristics, and the impact of source tasks and pre-trained models on channel prediction approaches. Finally, we discuss open challenges and possible future research directions for ML-based channel prediction.
Abstract:Recently, there has been increasing interest in 6G technology for integrated sensing and communications, where positioning stands out as a key application. In the realm of 6G, cell-free massive multiple-input multiple-output (MIMO) systems, featuring distributed base stations equipped with a large number of antennas, present an abundant source of angle-of-arrival (AOA) information that could be exploited for positioning applications. In this paper, we leverage this AOA information at the base stations using the multiple signal classification (MUSIC) algorithm, in conjunction with received signal strength (RSS), for positioning through Gaussian process regression (GPR). An AOA fingerprint database is constructed by capturing the angle data from multiple locations across the network area and is combined with RSS data from the same locations to form a hybrid fingerprint, which is then used to train a GPR model employing a squared-exponential kernel. The trained regression model is subsequently utilized to estimate the location of a user equipment. Simulations demonstrate that the GPR model with hybrid input achieves better positioning accuracy than traditional GPR models utilizing RSS-only and AOA-only inputs.
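To make the regression step concrete, here is a minimal Python sketch of hybrid-fingerprint GPR with a squared-exponential (RBF) kernel; the numbers of base stations and fingerprints, the feature layout, and the synthetic data are illustrative assumptions rather than the paper's setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
num_fingerprints, num_bs = 200, 8

# Hypothetical hybrid fingerprints: each row concatenates RSS values and MUSIC AOA estimates.
rss = rng.normal(-80.0, 10.0, size=(num_fingerprints, num_bs))       # dBm
aoa = rng.uniform(-np.pi, np.pi, size=(num_fingerprints, num_bs))    # radians
X_hybrid = np.hstack([rss, aoa])
positions = rng.uniform(0.0, 100.0, size=(num_fingerprints, 2))      # metres

# Squared-exponential (RBF) kernel plus a noise term; hyperparameters are fitted from data.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_hybrid, positions)

# Estimate the position of a new user-equipment fingerprint.
x_query = np.hstack([rng.normal(-80.0, 10.0, num_bs), rng.uniform(-np.pi, np.pi, num_bs)])
print(gpr.predict(x_query.reshape(1, -1)))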
Abstract:Federated learning (FL) is vulnerable to backdoor attacks, where adversaries alter model behavior on target classification labels by embedding triggers into data samples. While these attacks have received considerable attention in horizontal FL, they are less understood for vertical FL (VFL), where devices hold different features of the samples, and only the server holds the labels. In this work, we propose a novel backdoor attack on VFL which (i) does not rely on gradient information from the server and (ii) considers potential collusion among multiple adversaries for sample selection and trigger embedding. Our label inference model augments variational autoencoders with metric learning, which adversaries can train locally. A consensus process over the adversary graph topology determines which datapoints to poison. We further propose methods for trigger splitting across the adversaries, with an intensity-based implantation scheme skewing the server towards the trigger. Our convergence analysis reveals the impact of backdoor perturbations on VFL indicated by a stationarity gap for the trained model, which we verify empirically as well. We conduct experiments comparing our attack with recent backdoor VFL approaches, finding that ours obtains significantly higher success rates for the same main task performance despite not using server information. Additionally, our results verify the impact of collusion on attack performance.
Abstract:As 6G and beyond networks grow increasingly complex and interconnected, federated learning (FL) emerges as an indispensable paradigm for securely and efficiently leveraging decentralized edge data for AI. By virtue of the superposition property of communication signals, over-the-air FL (OtA-FL) achieves constant communication overhead irrespective of the number of edge devices (EDs). However, training neural networks over the air still incurs substantial communication costs, as the number of transmitted symbols equals the number of trainable parameters. To alleviate this issue, the most straightforward approach is to reduce the number of transmitted symbols by 1) gradient compression and 2) gradient sparsification. Unfortunately, these methods are incompatible with OtA-FL because they break its superposition property. In this work, we introduce federated zeroth-order estimation (Fed-ZOE), an efficient framework inspired by the randomized gradient estimator (RGE) commonly used in zeroth-order optimization (ZOO). In Fed-ZOE, EDs perform local weight updates as in standard FL, but instead of transmitting full gradient vectors, they send compressed local model updates in the form of several scalar-valued inner products between the local model update vector and random vectors. These scalar values enable the parameter server (PS) to reconstruct the gradient using the RGE trick with highly reduced overhead, while preserving the superposition property. Unlike conventional ZOO leveraging RGE for step-wise gradient descent, Fed-ZOE compresses local model update vectors before transmission, thereby achieving higher accuracy and computational efficiency. Numerical evaluations using ResNet-18 on datasets such as CIFAR-10, TinyImageNet, SVHN, CIFAR-100, and Brain-CT demonstrate that Fed-ZOE achieves performance comparable to Fed-OtA while drastically reducing communication costs.
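To illustrate the core compression step, the following single-device Python sketch mirrors the RGE-style reconstruction described above; the model dimension, the number of probe vectors, and the shared-seed assumption are ours, and over-the-air aggregation across devices is omitted.

import numpy as np

rng = np.random.default_rng(0)
dim, num_probes = 10_000, 200        # model size and number of transmitted scalars

# Local model update at an edge device (stand-in for the true weight delta).
delta_w = rng.normal(size=dim)

# Random probe vectors, assumed reproducible at the server via a shared seed.
probes = rng.normal(size=(num_probes, dim))

# Device side: transmit only num_probes scalar inner products instead of dim symbols.
scalars = probes @ delta_w

# Server side: RGE-style reconstruction. With i.i.d. standard-normal probes,
# E[v v^T] = I, so averaging <delta_w, v> v gives an unbiased estimate of delta_w.
delta_w_hat = (probes.T @ scalars) / num_probes

cos_sim = delta_w @ delta_w_hat / (np.linalg.norm(delta_w) * np.linalg.norm(delta_w_hat))
print(f"compression: {dim / num_probes:.0f}x fewer symbols, cosine similarity: {cos_sim:.3f}")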
Abstract:Integrated sensing and communications (ISAC) is a key enabler of new applications, such as precision agriculture, extended reality (XR), and digital twins, for 6G wireless systems. However, implementing ISAC technology remains challenging due to practical constraints such as its high complexity. In this paper, we introduce a novel ISAC waveform design strategy, called the spatial-division ISAC (SD-ISAC) waveform, which simplifies the ISAC waveform design problem by decoupling it into separate communication and radar waveform design tasks. Specifically, the proposed strategy leverages the null space of the communication channel to superimpose sensing signals onto communication signals without interference. This approach offers multiple benefits, including reduced complexity and the reuse of existing communication and radar waveforms. We then address the problem of optimizing the spatial and temporal properties of the proposed waveform. We develop a low-complexity beampattern matching algorithm, leveraging a majorization-minimization (MM) technique. Furthermore, we develop a range sidelobe suppression algorithm based on manifold optimization. We provide a comprehensive discussion of the practical advantages and potential challenges of the proposed method, including null-space feedback. We evaluate the performance of the proposed waveform design algorithm through extensive simulations. Simulation results show that the proposed method can provide similar or even superior performance to existing ISAC algorithms while reducing computation time significantly.
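A minimal Python sketch of the null-space superposition idea described above; the antenna count, single-antenna users, and random signals are illustrative placeholders, not the paper's waveform design.

import numpy as np

rng = np.random.default_rng(2)
num_tx, num_users = 16, 4            # transmit antennas, single-antenna communication users

# Downlink communication channel (rows: users, columns: transmit antennas).
H = (rng.normal(size=(num_users, num_tx)) + 1j * rng.normal(size=(num_users, num_tx))) / np.sqrt(2)

# Orthonormal basis of the null space of H from the right singular vectors.
_, _, Vh = np.linalg.svd(H)
null_basis = Vh[num_users:].conj().T          # shape (num_tx, num_tx - num_users)

# Superimpose a sensing signal drawn in the null space onto the communication signal.
x_comm = rng.normal(size=(num_tx, 1)) + 1j * rng.normal(size=(num_tx, 1))
s_sense = rng.normal(size=(num_tx - num_users, 1)) + 1j * rng.normal(size=(num_tx - num_users, 1))
x_total = x_comm + null_basis @ s_sense

# The sensing component contributes no interference at the communication users.
print(np.linalg.norm(H @ (null_basis @ s_sense)))   # ~0 up to numerical precision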