In this paper, the novel simultaneously transmitting and reflecting (STAR) reconfigurable intelligent surface (RIS), which enables full-space coverage for users located on both sides of the surface, is investigated in a multi-user mobile edge computing (MEC) system. A computation rate maximization problem is formulated via the joint design of the STAR-RIS phase shifts, the reflection and transmission amplitude coefficients, the receive beamforming vectors at the access point, and the users' energy partition strategies for local computing and offloading. Two operating protocols of the STAR-RIS, namely, energy splitting (ES) and mode switching (MS), are studied. Based on difference-of-convex (DC) programming and semidefinite relaxation, an iterative algorithm is proposed for the ES protocol to solve the formulated non-convex problem. Furthermore, the proposed algorithm is extended to solve the non-convex, non-continuous MS problem with binary amplitude coefficients. Simulation results show that the resulting STAR-RIS-aided MEC system significantly improves the computation rate compared to the baseline scheme with a conventional reflect-only/transmit-only RIS.
Weakly supervised detection of anomalies in surveillance videos is a challenging task. Going beyond existing works, which have limited ability to localize anomalies in long videos, we propose a novel glance-and-focus network to effectively integrate spatial-temporal information for accurate anomaly detection. In addition, we empirically find that existing approaches that use feature magnitudes to represent the degree of anomaly typically ignore the effects of scene variations, and hence suffer sub-optimal performance due to the inconsistency of feature magnitudes across scenes. To address this issue, we propose a Feature Amplification Mechanism and a Magnitude Contrastive Loss to enhance the discriminativeness of feature magnitudes for detecting anomalies. Experimental results on two large-scale benchmarks, UCF-Crime and XD-Violence, demonstrate that our method outperforms state-of-the-art approaches.
Cooperative beamforming design has been recognized as an effective approach in modern wireless networks to meet the dramatically increasing demand for wireless data traffic. In conventional approaches, it is formulated as an optimization problem and solved iteratively in an instance-by-instance manner. Recently, learning-based methods have emerged that enable real-time implementation by approximating the mapping from problem instances to the corresponding solutions. Among various neural network architectures, graph neural networks (GNNs) can effectively exploit the graph topology of wireless networks to achieve better generalization on unseen problem sizes. However, current GNNs are only equipped with a node-update mechanism, which restricts them from modeling more complicated problems such as cooperative beamforming design, where the beamformers lie on the graph edges of wireless networks. To fill this gap, we propose an edge-graph neural network (Edge-GNN) that incorporates an edge-update mechanism into the GNN and thereby learns the cooperative beamforming on the graph edges. Simulation results show that the proposed Edge-GNN achieves a higher sum rate with much shorter computation time than state-of-the-art approaches, and generalizes well to different numbers of base stations and user equipments.
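The edge-update mechanism described in this abstract can be sketched in a few lines. The toy layer below (plain NumPy) updates each edge feature from its two endpoint node features and then lets each node aggregate its incident edges; the concatenate-then-project rule, mean aggregation, tanh nonlinearity, and weight shapes are all assumptions chosen for illustration, not the paper's exact architecture.

```python
import numpy as np

def edge_gnn_layer(node_feat, edge_feat, edges, W_e, W_n):
    """One illustrative Edge-GNN layer (a sketch, not the paper's design).

    Each edge feature (e.g. a beamformer between a base station and a user)
    is updated from its two endpoint node features; each node then averages
    the updated features of its incident edges before its own update.
    """
    num_nodes = node_feat.shape[0]
    # Edge update: project the concatenation [edge, endpoint u, endpoint v].
    new_edge = np.tanh(
        np.concatenate(
            [edge_feat, node_feat[edges[:, 0]], node_feat[edges[:, 1]]], axis=1
        ) @ W_e
    )
    # Node update: mean over incident edge features at both endpoints.
    agg = np.zeros((num_nodes, new_edge.shape[1]))
    cnt = np.zeros(num_nodes)
    for k, (u, v) in enumerate(edges):
        agg[u] += new_edge[k]; cnt[u] += 1
        agg[v] += new_edge[k]; cnt[v] += 1
    agg /= np.maximum(cnt, 1)[:, None]
    new_node = np.tanh(np.concatenate([node_feat, agg], axis=1) @ W_n)
    return new_node, new_edge
```

Because the same weights are shared across all edges and nodes, the layer applies unchanged to graphs with different numbers of base stations and users, which is the generalization property the abstract highlights.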
Device activity detection in the emerging cell-free massive multiple-input multiple-output (MIMO) systems has been recognized as a crucial task in machine-type communications, in which multiple access points (APs) jointly identify the active devices among a large number of potential devices based on the received signals. Most existing works addressing this problem rely on the impractical assumption that different active devices transmit signals synchronously. In practice, however, synchronization cannot be guaranteed due to low-cost oscillators, which introduces additional discontinuous and nonconvex constraints into the detection problem. To address this challenge, this paper reveals an equivalent reformulation of the asynchronous activity detection problem, which facilitates the development of a centralized algorithm and a distributed algorithm that enforce the highly nonconvex constraints gradually as the iteration number increases, so that the sequences generated by the proposed algorithms can bypass bad stationary points. To reduce the capacity requirements of the fronthaul links, we further design a communication-efficient accelerated distributed algorithm. Simulation results demonstrate that the proposed centralized and distributed algorithms outperform state-of-the-art approaches, and that the proposed accelerated distributed algorithm achieves detection performance close to that of the centralized algorithm with a much smaller number of bits transmitted over the fronthaul links.
Mobile edge computing (MEC) is envisioned as a promising technique to support computation-intensive and time-critical applications in the future Internet of Things (IoT) era. However, uplink transmission performance is highly impacted by the hostile wireless channel, the low bandwidth, and the low transmission power of IoT devices. Recently, the intelligent reflecting surface (IRS) has drawn much attention for its capability to control the wireless environment and thereby enhance the spectrum and energy efficiency of wireless communications. In this paper, we consider an IRS-aided multi-device MEC system in which each IoT device follows the binary offloading policy, i.e., a task is computed as a whole either locally or remotely at the edge server. We aim to minimize the total energy consumption of the devices by jointly optimizing the binary offloading modes, the CPU frequencies, the offloading powers, the offloading times, and the IRS phase shifts for all devices. Two algorithms, one greedy-based and one penalty-based, are proposed to solve the challenging nonconvex and discontinuous problem. It is found that the penalty-based method has only linear complexity with respect to the number of devices, yet performs close to the greedy-based method, whose complexity is cubic in the number of devices. Furthermore, binary offloading with the aid of the IRS indeed saves more energy than the case without an IRS.
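The core idea of a penalty-based treatment of binary offloading modes can be illustrated on a toy problem: relax each binary mode x in {0, 1} to the interval [0, 1] and add a penalty lam * x(1 - x), which vanishes only at binary points, so that projected gradient descent is driven back to a binary decision. The linear per-device energy model and all parameter values below are placeholder assumptions, not the paper's formulation.

```python
import numpy as np

def penalty_binary_offload(e_local, e_offload, lam=5.0, steps=300, lr=0.05):
    """Toy sketch of the penalty idea for binary offloading decisions.

    x_i = 0 means device i computes locally (cost e_local[i]);
    x_i = 1 means it offloads (cost e_offload[i]). The relaxed objective
    (1 - x) * e_local + x * e_offload + lam * x * (1 - x)
    is minimized by projected gradient descent over x in [0, 1]^n.
    """
    e_local = np.asarray(e_local, dtype=float)
    e_offload = np.asarray(e_offload, dtype=float)
    x = np.full(e_local.shape, 0.5)  # start undecided
    for _ in range(steps):
        # Gradient of the relaxed, penalized objective.
        grad = e_offload - e_local + lam * (1.0 - 2.0 * x)
        x = np.clip(x - lr * grad, 0.0, 1.0)
    return np.round(x)  # the penalty has driven x to (near-)binary values
```

Each device ends up in the mode with the lower energy cost; the penalty term only steers the relaxed variable toward {0, 1} without changing which binary point is preferred.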
Reconfigurable intelligent surfaces (RISs) have a revolutionary capability to customize the radio propagation environment for wireless networks. To fully exploit the advantages of RISs in wireless systems, the phase shifts of the reflecting elements must be jointly designed with conventional communication resources, such as beamformers, transmit power, and computation time. However, due to the unique constraints on the phase shifts, and the massive numbers of reflecting elements and users in large-scale networks, the resulting optimization problems are challenging to solve. This paper reviews current optimization methods and artificial intelligence-based methods for handling the constraints imposed by RISs and compares them in terms of solution quality and computational complexity. Future challenges in phase shift optimization involving RISs are also described and potential solutions are discussed.
Recently, there has been a revival of interest in low-rank matrix completion-based unsupervised learning through the lens of dual-graph regularization, which has significantly improved the performance of multidisciplinary machine learning tasks such as recommendation systems, genotype imputation, and image inpainting. While dual-graph regularization contributes a major part of this success, it usually involves computationally costly hyper-parameter tuning. To circumvent this drawback and improve the completion performance, we propose a novel Bayesian learning algorithm that automatically learns the hyper-parameters associated with dual-graph regularization and, at the same time, guarantees the low-rankness of the matrix completion. Notably, a novel prior is devised to promote the low-rankness of the matrix and encode the dual-graph information simultaneously, which is more challenging than the single-graph counterpart. A nontrivial conditional conjugacy between the proposed prior and the likelihood function is then exploited, from which an efficient algorithm is derived under the variational inference framework. Extensive experiments on synthetic and real-world datasets demonstrate the state-of-the-art performance of the proposed learning algorithm for various data analysis tasks.
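To see what dual-graph regularization buys, a point-estimate analogue can be sketched in a few lines: factor the matrix as X = U V^T (low-rank by construction) and penalize roughness of the factors over the row and column graphs via tr(U^T L_row U) and tr(V^T L_col V). The paper's Bayesian algorithm learns the regularization weights automatically; in this sketch, alpha and beta are fixed by hand, which is precisely the costly tuning the paper avoids.

```python
import numpy as np

def dual_graph_completion(M, mask, L_row, L_col, rank=2, alpha=0.1, beta=0.1,
                          steps=1000, lr=0.01, seed=0):
    """Gradient-descent sketch of dual-graph-regularized completion.

    Minimizes 0.5*||mask*(U V^T - M)||_F^2
              + 0.5*alpha*tr(U^T L_row U) + 0.5*beta*tr(V^T L_col V),
    where L_row and L_col are graph Laplacians over rows and columns.
    A deterministic stand-in for the paper's Bayesian treatment.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = mask * (U @ V.T - M)            # residual on observed entries only
        gU = R @ V + alpha * (L_row @ U)    # gradient w.r.t. U
        gV = R.T @ U + beta * (L_col @ V)   # gradient w.r.t. V
        U -= lr * gU
        V -= lr * gV
    return U @ V.T
```

The Laplacian terms pull the rows (columns) of U (V) toward agreement along graph edges, which is how side information about similar users or items propagates into the unobserved entries.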
Reconfigurable intelligent surface (RIS) is a promising solution for enhancing the performance of wireless communications by reconfiguring the wireless propagation environment. In this paper, we investigate the joint design of RIS passive beamforming and subcarrier matching in RIS-assisted orthogonal frequency division multiplexing (OFDM) dual-hop relaying systems under two cases, depending on the presence of the RIS reflected link from the source to the destination in the first hop. Accordingly, we formulate a mixed-integer nonlinear programming (MINLP) problem to maximize the sum achievable rate over all subcarriers by jointly optimizing the RIS passive beamforming and the subcarrier matching. To solve this challenging problem, we first develop a branch-and-bound (BnB)-based alternating optimization algorithm that obtains a near-optimal solution by alternately optimizing the subcarrier matching via the BnB method and the RIS passive beamforming via semidefinite relaxation. Then, a low-complexity difference-of-convex penalty-based algorithm is proposed to reduce the computational complexity of the BnB method. To further reduce the computational complexity, we adopt a learning-to-optimize approach that learns the joint design obtained from the optimization techniques and is more amenable to practical implementation. Lastly, computer simulations are presented to evaluate the performance of the proposed algorithms in the two cases. Simulation results demonstrate that the RIS-assisted OFDM relaying system achieves a substantial achievable-rate gain over both the system without RIS and the system with random passive beamforming, since RIS passive beamforming can be leveraged to recast the subcarrier matching among different subcarriers and balance the signal-to-noise ratio within each subcarrier pair.
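The subcarrier-matching subproblem in isolation is a small assignment problem, which a brute-force search makes concrete. The sketch below assumes a decode-and-forward relay where a pair's rate is limited by its weaker hop, log2(1 + min(snr1_i, snr2_j)); it ignores the coupling with RIS passive beamforming and stands in for the paper's BnB solver, so it is only viable for small numbers of subcarriers.

```python
import itertools
import numpy as np

def best_subcarrier_matching(snr_hop1, snr_hop2):
    """Exhaustive subcarrier matching for a two-hop relay (toy sketch).

    A matching assigns first-hop subcarrier i to second-hop subcarrier
    perm[i]; the pair rate is bottlenecked by the weaker hop. Returns the
    rate-maximizing permutation and its sum rate in bits/s/Hz.
    """
    N = len(snr_hop1)
    best_rate, best_perm = -np.inf, None
    for perm in itertools.permutations(range(N)):
        rate = sum(np.log2(1.0 + min(snr_hop1[i], snr_hop2[j]))
                   for i, j in enumerate(perm))
        if rate > best_rate:
            best_rate, best_perm = rate, perm
    return best_perm, best_rate
```

The example below shows the SNR-balancing intuition from the abstract: pairing a strong first-hop subcarrier with a strong second-hop subcarrier (and weak with weak) beats splitting strong and weak across pairs, because each pair's rate is set by its weaker hop.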
To support modern machine-type communications, a crucial task during the random access phase is device activity detection, i.e., detecting the active devices among a large number of potential devices based on the received signal at the access point. By utilizing the statistical properties of the channel, state-of-the-art covariance-based methods have been demonstrated to achieve better activity detection performance than compressed sensing-based methods. However, covariance-based methods require solving a high-dimensional nonconvex optimization problem by updating the estimate of the activity status of each device sequentially. Since the number of updates is proportional to the number of devices, the computational complexity and delay of these iterative updates make real-time implementation difficult, especially when the number of devices scales up. Inspired by the success of deep learning for real-time inference, this paper proposes a learning-based method with a customized heterogeneous transformer architecture for device activity detection. By adopting an attention mechanism in the architecture design, the proposed method is able to extract the relevance between the device pilots and the received signal, is permutation-equivariant with respect to the devices, and is adaptable to different numbers of devices. Simulation results demonstrate that the proposed method achieves better activity detection performance with much shorter computation time than the state-of-the-art covariance-based approach, and generalizes well to different numbers of devices, different numbers of BS antennas, and different signal-to-noise ratios.
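The attention mechanism underlying the pilot/signal relevance extraction, as well as the permutation-equivariance property, can be sketched with single-head cross-attention: device-pilot embeddings form the queries and received-signal embeddings form the keys and values. All weight matrices and dimensions here are hypothetical; the paper's heterogeneous transformer is considerably richer.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pilot_signal_attention(pilot_emb, signal_emb, Wq, Wk, Wv):
    """Single-head cross-attention sketch for activity detection.

    Each device's pilot embedding attends over chunks of the received
    signal; the attention weights act as a learned relevance score
    between that pilot and the received signal. The returned per-device
    context features would feed a downstream active/inactive classifier.
    """
    Q = pilot_emb @ Wq                      # (num_devices, d)
    K = signal_emb @ Wk                     # (num_signal_chunks, d)
    V = signal_emb @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[1])  # scaled dot-product relevance
    attn = softmax(scores, axis=1)          # each row sums to 1
    return attn @ V                         # per-device context features
```

Because the devices only interact through shared weights, permuting the rows of `pilot_emb` simply permutes the output rows, which is the permutation-equivariance the abstract claims; nothing ties the computation to a fixed number of devices.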
Wireless powered backscatter communication (WPBC) is capable of ultra-low-power communication and is thus promising for Internet of Things (IoT) networks. In practice, however, it is challenging to apply WPBC in large-scale IoT networks because of its short communication range. To address this challenge, this paper exploits an unmanned ground vehicle (UGV) to assist WPBC in large-scale IoT networks. In particular, we investigate the joint design of network planning and dynamic resource allocation of the access point (AP), tag reader, and UGV to minimize the total energy consumption. The AP can operate in either half-duplex (HD) or full-duplex (FD) multiplexing mode. Under the HD mode, the optimal cell radius is derived, and the optimal power allocation and transmit/receive beamforming are obtained in closed form. Under the FD mode, the optimal resource allocation, as well as two suboptimal ones with low computational complexity, is developed. Simulation results disclose that dynamic power allocation at the tag reader, rather than at the AP, dominates the network energy efficiency, while the AP operating in the FD mode outperforms the HD mode in terms of energy efficiency.