This paper investigates intelligent reflecting surface (IRS) enabled non-line-of-sight (NLoS) wireless sensing, in which an IRS is dedicatedly deployed to assist an access point (AP) in sensing a target located in its NLoS region. It is assumed that the AP is equipped with multiple antennas and the IRS is equipped with a uniform linear array. We consider two types of target models, namely the point and extended targets, for which the AP aims to estimate the target's direction-of-arrival (DoA) and the target response matrix with respect to the IRS, respectively, based on the echo signals from the AP-IRS-target-IRS-AP link. Under this setup, we jointly design the transmit beamforming at the AP and the reflective beamforming at the IRS to minimize the Cram\'er-Rao bound (CRB) on the estimation error. Towards this end, we first obtain closed-form CRB expressions for the two target models. It is shown that in the point target case, the CRB for estimating the DoA depends on both the transmit and reflective beamformers, whereas in the extended target case, the CRB for estimating the target response matrix depends only on the transmit beamformers. Next, for the point target case, we optimize the joint beamforming design to minimize the CRB via alternating optimization, semi-definite relaxation, and successive convex approximation. For the extended target case, we obtain the optimal transmit beamforming solution that minimizes the CRB in closed form. Finally, numerical results show that for both cases, the proposed CRB-minimization-based designs achieve improved sensing performance in terms of mean squared error, as compared to traditional schemes.
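As a toy illustration of why an extended-target CRB can admit a closed-form transmit solution, note that trace-inverse-type bounds of the form tr(R^{-1}) (with R the transmit sample covariance) are minimized by an isotropic covariance under a total power budget. The numbers below are a hypothetical numerical check of that convexity property, not the paper's exact CRB expression:

```python
import numpy as np

def trace_inverse_crb(eigs):
    # CRB proxy: trace of the inverse of a sample covariance with eigenvalues `eigs`.
    return float(np.sum(1.0 / np.asarray(eigs)))

P, M = 8.0, 4                                # total power budget, number of AP antennas
equal = np.full(M, P / M)                    # isotropic (equal-power) allocation
skewed = np.array([5.0, 1.0, 1.0, 1.0])      # same total power, unequal split
```

By convexity of tr(R^{-1}) in the eigenvalues, the equal split P/M per eigenmode attains the minimum, which matches the flavor of the closed-form extended-target solution.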
With recent advancements, wireless local area network (WLAN), or wireless fidelity (Wi-Fi), technology has been successfully utilized to realize sensing functionalities such as detection, localization, and recognition. However, WLAN standards have been developed mainly for communication purposes, and thus may not meet the stringent sensing requirements of emerging applications. To resolve this issue, a new Task Group (TG), namely IEEE 802.11bf, has been established by the IEEE 802.11 working group, with the objective of creating a new amendment to the WLAN standard that meets advanced sensing requirements while minimizing the effect on communications. This paper provides a comprehensive overview of the up-to-date efforts in the IEEE 802.11bf TG. First, we introduce the definition of the 802.11bf amendment and its standardization timeline. Then, we discuss the WLAN sensing procedure and framework used for measurement acquisition, considering both conventional sensing at sub-7 GHz and directional multi-gigabit (DMG) sensing at 60 GHz. Next, we present various candidate technical features for IEEE 802.11bf, including waveform/sequence design, feedback types, and quantization, as well as security and privacy. Finally, we describe the methodologies used by the IEEE 802.11bf TG to evaluate the performance of candidate proposals. We hope that this overview offers useful insights into IEEE 802.11 WLAN sensing for interested readers and helps promote the wide deployment of the IEEE 802.11bf standard.
This letter studies the energy-efficient design of a downlink multi-antenna multi-user system consisting of a multi-antenna base station (BS) and multiple single-antenna users, by considering the practical non-linear power amplifier (PA) efficiency and the on-off power consumption of the radio frequency (RF) chain at each transmit antenna. Under this setup, we jointly optimize the transmit beamforming and antenna on/off selection at the BS to minimize its total power consumption while ensuring individual signal-to-interference-plus-noise ratio (SINR) constraints at the users. However, due to the non-linear PA efficiency and the on-off RF chain power consumption, the formulated SINR-constrained power minimization problem is highly non-convex and difficult to solve. To tackle this issue, we propose an efficient algorithm that obtains a high-quality solution based on the technique of sequential convex approximation (SCA). We provide numerical results to validate the performance of our proposed design. It is shown that at the optimized solution, the BS tends to activate fewer antennas and transmit at higher power on each active antenna to exploit the non-linear PA efficiency.
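The SCA technique the letter relies on can be shown on a one-dimensional toy problem: write the non-convex objective as a difference of convex functions, linearize the concave part at the current iterate, and solve the convex surrogate, which guarantees a monotonically non-increasing objective. Everything below (the function f, the iterate count) is illustrative and unrelated to the letter's actual beamforming problem:

```python
import numpy as np

def f(x):
    # Non-convex toy objective, written as a difference of convex functions:
    # f(x) = g(x) - h(x) with g(x) = x**4 (convex) and h(x) = 2*x**2 (convex).
    return x ** 4 - 2 * x ** 2

x = 2.0
history = [f(x)]
for _ in range(50):
    # SCA surrogate at x_k: x**4 - h(x_k) - h'(x_k) * (x - x_k); its minimizer
    # solves 4*x**3 = 4*x_k, i.e. x = cbrt(x_k), so each SCA step is closed form.
    x = np.cbrt(x)
    history.append(f(x))
```

Starting from x = 2, the iterates descend monotonically to the stationary point x = 1, where f(1) = -1; the same descent guarantee is what makes SCA attractive for the non-convex power minimization problem.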
Channel knowledge map (CKM) has recently emerged to facilitate placement and trajectory optimization for unmanned aerial vehicle (UAV) communications. This paper investigates a CKM-assisted multi-UAV wireless network, focusing on the construction and utilization of CKMs for multi-UAV placement optimization. First, we consider the CKM construction problem when data measurements are available for only a limited number of points. Towards this end, we exploit a data-driven interpolation technique to construct CKMs that characterize the signal propagation environments. Next, we study the multi-UAV placement optimization problem by utilizing the constructed CKMs, in which the multiple UAVs aim to optimize their placement locations to maximize the weighted sum rate with their respectively associated ground base stations (GBSs). However, the rate function based on the CKMs is generally non-differentiable. To tackle this issue, we propose a novel iterative algorithm based on derivative-free optimization, in which a series of quadratic functions are iteratively constructed to approximate the objective function under a set of interpolation conditions, and accordingly, the UAVs' placement locations are updated by maximizing the approximate function subject to a trust region constraint. Finally, numerical results are presented to validate that the proposed design achieves near-optimal performance with much lower implementation complexity.
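The derivative-free trust-region step described above (fit a quadratic to interpolated samples, then maximize it inside a trust region) can be sketched as follows. The objective `rate` is a stand-in for the CKM-based rate map, and the fixed trust radius and grid search are simplifications of the paper's method:

```python
import numpy as np

def rate(x):
    # Stand-in for the CKM-based weighted sum rate at placement x (the real
    # objective is interpolated from measurements and need not be smooth).
    return -((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2)

def fit_quadratic(X, y):
    # Least-squares fit of a 2-D quadratic surrogate over the monomials
    # [1, x, y, x^2, xy, y^2] through the interpolation points.
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def surrogate(coef, pts):
    return (coef[0] + coef[1] * pts[:, 0] + coef[2] * pts[:, 1]
            + coef[3] * pts[:, 0] ** 2 + coef[4] * pts[:, 0] * pts[:, 1]
            + coef[5] * pts[:, 1] ** 2)

rng = np.random.default_rng(0)
x, radius = np.array([3.0, 3.0]), 1.0        # initial placement, trust radius
for _ in range(20):
    X = x + radius * rng.uniform(-1, 1, size=(12, 2))   # interpolation set
    coef = fit_quadratic(X, np.array([rate(p) for p in X]))
    g = np.linspace(-radius, radius, 21)
    cand = x + np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    best = cand[np.argmax(surrogate(coef, cand))]       # maximize within trust region
    if rate(best) > rate(x):                            # accept improving steps only
        x = best
```

On this toy concave objective the iterates march from (3, 3) to the optimum near (1, -0.5) using only function evaluations, mirroring how the UAV placements are updated without rate gradients.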
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC) to enable low-latency intelligent services at the network edge. In this system, multiple ISAC devices perform radar sensing to obtain multi-view data, and then offload quantized versions of the extracted features to a centralized edge server, which conducts model inference based on the cascaded feature vectors. Under this setup and considering classification tasks, we measure the inference accuracy by adopting an approximate but tractable metric, namely the discriminant gain, which is defined as the distance between two classes in the Euclidean feature space under normalized covariance. To maximize the discriminant gain, we first derive a closed-form expression quantifying the influence of the sensing, computation, and communication processes on it. Then, an end-to-end task-oriented resource management approach is developed by integrating the three processes into a joint design. This integrated sensing, computation, and communication (ISCC) design approach, however, leads to a challenging non-convex optimization problem, due to the complicated form of the discriminant gain and the device heterogeneity in terms of channel gain, quantization level, and generated feature subsets. Remarkably, the considered non-convex problem can be optimally solved based on the sum-of-ratios method. This yields the optimal ISCC scheme, which jointly determines the transmit power and time allocation at the multiple devices for sensing and communication, as well as their quantization bit allocation for computation distortion control. Using human motion recognition as a concrete AI inference task, extensive experiments are conducted to verify the performance of the derived optimal ISCC scheme.
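Taking the abstract's definition at face value (distance between two class centroids under normalized covariance), the discriminant gain can be computed as a Mahalanobis-type squared distance. The helper below is a hypothetical sketch; the paper's exact definition may aggregate over class pairs or feature subsets differently:

```python
import numpy as np

def discriminant_gain(mu1, mu2, cov):
    # Squared distance between the two class centroids after normalizing by
    # the shared feature covariance: (mu1 - mu2)^T cov^{-1} (mu1 - mu2).
    d = mu1 - mu2
    return float(d @ np.linalg.solve(cov, d))

cov = np.diag([1.0, 4.0])
g_near = discriminant_gain(np.array([0.0, 0.0]), np.array([1.0, 0.0]), cov)
g_far = discriminant_gain(np.array([0.0, 0.0]), np.array([3.0, 0.0]), cov)
```

Better-separated classes yield a larger gain, which is why it serves as a tractable proxy for classification accuracy in the resource allocation.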
Social bots are automated accounts on social networks that attempt to behave like humans. While Graph Neural Networks (GNNs) have been widely applied to social bot detection, state-of-the-art approaches rely heavily on domain expertise and prior knowledge to design a dedicated neural network architecture for a specific classification task. Involving an excessive number of nodes and network layers in the model design, however, usually causes the over-smoothing problem and a lack of embedding discrimination. In this paper, we propose RoSGAS, a novel Reinforced and Self-supervised GNN Architecture Search framework, to adaptively pinpoint the most suitable multi-hop neighborhood and the number of layers in the GNN architecture. More specifically, we consider the social bot detection problem as a user-centric subgraph embedding and classification task. We exploit a heterogeneous information network to represent user connectivity by leveraging account metadata, relationships, behavioral features, and content features. RoSGAS uses a multi-agent deep reinforcement learning (RL) mechanism to navigate the search for the optimal neighborhood and network layers, learning the subgraph embedding for each target user individually. A nearest-neighbor mechanism is developed to accelerate the RL training process, and RoSGAS learns more discriminative subgraph embeddings with the aid of self-supervised learning. Experiments on 5 Twitter datasets show that RoSGAS outperforms state-of-the-art approaches in terms of accuracy, training efficiency, and stability, and generalizes better when handling unseen samples.
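RoSGAS itself uses multi-agent deep RL over user-centric subgraphs; as a much looser analogy of searching neighborhood hops and layer counts from noisy validation feedback, here is an ε-greedy bandit over hypothetical (hop, layer) configurations with a synthetic accuracy model (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
arms = [(h, l) for h in (1, 2, 3) for l in (1, 2, 3)]      # (k-hop, #layers) pairs
true_acc = {a: 0.70 + 0.05 * (a == (2, 2)) for a in arms}  # pretend (2, 2) is best

Q = np.zeros(len(arms))        # running mean reward per configuration
counts = np.zeros(len(arms))
for t in range(3000):
    # Epsilon-greedy: explore a random configuration 10% of the time.
    i = rng.integers(len(arms)) if rng.random() < 0.1 else int(np.argmax(Q))
    reward = true_acc[arms[i]] + 0.02 * rng.standard_normal()  # noisy val. accuracy
    counts[i] += 1
    Q[i] += (reward - Q[i]) / counts[i]    # incremental mean update

best = arms[int(np.argmax(Q))]
```

The agent converges on the configuration with the highest underlying accuracy, which is the same search signal (per-user, per-architecture reward) that drives RoSGAS's far richer RL formulation.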
This paper studies asynchronous Federated Learning (FL) subject to clients' individual arbitrary communication patterns with the parameter server. We propose FedMobile, a new asynchronous FL algorithm that exploits the mobility of the mobile FL system to improve learning performance. The key idea is to leverage random client-to-client communication in a mobile network to create additional indirect communication opportunities with the server via upload and download relaying. We prove that FedMobile achieves a convergence rate of $O(\frac{1}{\sqrt{NT}})$, where $N$ is the number of clients and $T$ is the number of communication slots, and show that the optimal design involves an interesting trade-off in the best timing of relaying. Our analysis suggests that with an increased level of mobility, asynchronous FL converges faster using FedMobile. Experimental results on a synthetic dataset and two real-world datasets verify our theoretical findings.
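The upload-relaying idea can be illustrated with a small simulation: generate one set of random client-server and client-client contact events, then count how many clients' updates reach the server with and without relaying through encountered peers. All contact probabilities below are made up for illustration; by construction, relaying can only add delivery opportunities:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 100                                  # clients, communication slots
meet_server = rng.random((T, N)) < 0.02         # client i meets the server in slot t
meet_pairs = rng.random((T, N, N)) < 0.02       # client i meets client j in slot t

def delivered(relay):
    # Count how many of the N updates (one per client, created at t = 0)
    # reach the server within T slots, with or without peer relaying.
    holders = {u: {u} for u in range(N)}        # clients currently holding update u
    done = set()
    for t in range(T):
        if relay:
            for i in range(N):
                for j in range(N):
                    if i != j and meet_pairs[t, i, j]:
                        for u in holders:
                            if i in holders[u]:
                                holders[u].add(j)   # hand over a copy of update u
        for u in list(holders):
            if any(meet_server[t, c] for c in holders[u]):
                done.add(u)
                del holders[u]
    return len(done)

d_direct, d_relay = delivered(False), delivered(True)
```

Because the relayed run evaluates the same contact events with strictly more holders per update, it never delivers fewer updates than the direct-only run, mirroring the intuition that mobility creates extra indirect paths to the server.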
This paper studies the multi-antenna multicast channel with integrated sensing and communication (ISAC), in which a multi-antenna base station (BS) sends common messages to a set of single-antenna communication users (CUs) and simultaneously estimates the parameters of an extended target via radar sensing. We investigate the fundamental performance limits of this ISAC system, in terms of the achievable rate for communication and the estimation Cram\'er-Rao bound (CRB) for sensing. First, we derive the optimal transmit covariance in semi-closed form to balance the CRB-rate (C-R) tradeoff, and accordingly characterize the outer bound of a so-called C-R region. It is shown that the optimal transmit covariance should be of full rank, consisting of both information-carrying and dedicated sensing signals in general. Next, we consider a practical joint information and sensing beamforming design, and propose an efficient approach to optimize the joint beamforming for balancing the C-R tradeoff. Numerical results are presented to show the C-R region achieved by the optimal transmit covariance and the joint beamforming, as compared to other benchmark schemes.
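A rough way to visualize a C-R tradeoff region is to sweep the transmit power split across orthogonal beams and record an achievable rate together with a CRB proxy. Here tr(R^{-1}) stands in for the sensing CRB, a simplification of the paper's actual extended-target CRB, and the 2x2 channel is randomly generated:

```python
import numpy as np

rng = np.random.default_rng(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
P, sigma2 = 4.0, 1.0                 # power budget and noise power

def rate(R):
    # Achievable communication rate log2 det(I + H R H^H / sigma2).
    M = np.eye(2) + H @ R @ H.conj().T / sigma2
    return float(np.log2(np.linalg.det(M).real))

def crb_proxy(R):
    # tr(R^{-1}) as a stand-in for the extended-target estimation CRB.
    return float(np.trace(np.linalg.inv(R)).real)

# Sweep the power split between two orthogonal transmit directions.
points = [(crb_proxy(np.diag([a * P, (1 - a) * P]).astype(complex)),
           rate(np.diag([a * P, (1 - a) * P]).astype(complex)))
          for a in np.linspace(0.05, 0.95, 19)]
```

The sensing-optimal split (equal power, minimizing the trace-inverse proxy) generally differs from the rate-optimal one, which is the tension the C-R region formalizes.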
Federated learning (FL) is an outstanding distributed machine learning framework due to its benefits in data privacy and communication efficiency. Since full client participation is in many cases infeasible due to constrained resources, partial participation FL algorithms have been investigated that proactively select/sample a subset of clients, aiming to achieve learning performance close to the full participation case. This paper studies a much less well understood scenario of passive partial client participation, where partial participation results from external events, namely client dropout, rather than from a decision of the FL algorithm. We cast FL with client dropout as a special case of a larger class of FL problems in which clients can submit substitute (possibly inaccurate) local model updates. Based on our convergence analysis, we develop a new algorithm, FL-FDMS, which discovers friends of clients (i.e., clients whose data distributions are similar) on the fly and uses the friends' local updates as substitutes for those of the dropout clients, thereby reducing the substitution error and improving convergence. A complexity reduction mechanism is also incorporated into FL-FDMS, making it both theoretically sound and practically useful. Experiments on MNIST and CIFAR-10 confirm the superior performance of FL-FDMS in handling client dropout in FL.
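The "friends as substitutes" step can be sketched as a nearest-neighbor lookup over clients' most recent updates: when a client drops out, reuse the update of the alive client whose past update is most similar. FL-FDMS's actual friend discovery and complexity-reduction mechanism are more involved; the cosine-similarity rule below is a hypothetical simplification:

```python
import numpy as np

def substitute_update(updates, dropped, alive):
    # Pick the alive client whose latest update is most similar (by cosine
    # similarity) to the dropped client's last known update.
    ref = updates[dropped]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(alive, key=lambda j: cos(ref, updates[j]))

rng = np.random.default_rng(3)
base = rng.standard_normal(10)
updates = {
    0: base,                                  # client 0 will drop out
    1: base + 0.1 * rng.standard_normal(10),  # similar distribution: a 'friend'
    2: -base,                                 # very different distribution
}
friend = substitute_update(updates, dropped=0, alive=[1, 2])
```

Substituting the friend's update rather than an arbitrary one keeps the substitution error small, which is the quantity the convergence analysis bounds.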
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a development trend of future wireless communications. However, the extremely large-scale antenna array brings inevitable near-field and dual-wideband effects that seriously degrade the transmission performance. This paper proposes an algorithmic framework to design the beam combining for near-field wideband XL-MIMO uplink transmissions assisted by holographic metasurface antennas (HMAs). First, we introduce a spherical-wave-based channel model that simultaneously takes into account both the near-field and dual-wideband effects. Based on this model, we then formulate the HMA-based beam combining problem for the considered XL-MIMO communications, which is challenging due to the nonlinear coupling of the high-dimensional HMA weights and the baseband combiners. We further present a sum-mean-square-error-minimization-based algorithmic framework. Numerical results show that the proposed scheme can effectively alleviate the sum-rate loss caused by the near-field and dual-wideband effects in HMA-assisted XL-MIMO systems. Meanwhile, the proposed HMA-based scheme achieves a higher sum rate than the conventional phase-shifter-based hybrid analog/digital scheme with the same array aperture.
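At baseband, the sum-MSE criterion the framework minimizes reduces (for a fixed analog HMA configuration) to a linear MMSE combining problem. The sketch below ignores the HMA weights entirely and simply checks the LMMSE combiner against zero-forcing on a random narrowband channel, so it is only the innermost piece of the paper's design:

```python
import numpy as np

rng = np.random.default_rng(4)
M, K, sigma2 = 8, 4, 0.5       # receive antennas, users, noise power
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def mse(G):
    # Analytic sum-MSE of x_hat = G y for y = H x + n, x ~ CN(0, I), n ~ CN(0, sigma2 I):
    # tr((I - GH)(I - GH)^H) + sigma2 * tr(G G^H).
    E = np.eye(K) - G @ H
    return float((np.trace(E @ E.conj().T) + sigma2 * np.trace(G @ G.conj().T)).real)

G_lmmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(M))
G_zf = np.linalg.pinv(H)       # zero-forcing baseline
```

Since the LMMSE combiner is the exact minimizer of this quadratic MSE functional, it never does worse than zero-forcing; the paper's challenge is the outer, nonlinearly coupled optimization over the HMA weights.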