Mounting a reconfigurable intelligent surface (RIS) on an unmanned aerial vehicle (UAV) holds promise for improving traditional terrestrial network performance. Unlike conventional approaches that deploy passive RISs on UAVs, this study examines the efficacy of an aerial active RIS (AARIS). Specifically, the downlink transmission of an AARIS network is investigated, where the base station (BS) leverages rate-splitting multiple access (RSMA) for effective interference management and benefits from the support of an AARIS that jointly amplifies and reflects the BS's transmit signals. Considering both the non-trivial energy consumption of the active RIS and the limited energy storage of the UAV, we propose an innovative element selection strategy for optimizing the on/off status of the RIS elements, which adaptively manages the system's power consumption. To this end, a resource management problem is formulated, aiming to maximize the system energy efficiency (EE) by jointly optimizing the transmit beamforming at the BS, the element activation, phase shifts and amplification factors at the RIS, the RSMA common data rate at the users, and the UAV's trajectory. Due to the dynamic nature of the UAV and user mobility, a deep reinforcement learning (DRL) algorithm is designed for resource allocation, utilizing meta-learning to adaptively handle fast time-varying system dynamics. Simulations indicate that incorporating an active RIS at the UAV leads to a substantial EE gain compared to a passive RIS-aided UAV. We also observe the superiority of the RSMA-based AARIS system in terms of EE compared to existing approaches adopting non-orthogonal multiple access (NOMA).
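The energy-efficiency objective above can be made concrete with a minimal sketch: only switched-on RIS elements contribute amplification power, so the element-selection decision directly trades sum rate against consumed power. The channel model, power figures, and variable names below are illustrative assumptions, not the paper's exact system model.

```python
# Illustrative sketch (assumed parameters): energy efficiency of an active-RIS link
# where only activated elements consume amplification/circuit power.
import numpy as np

rng = np.random.default_rng(0)

N = 32                    # RIS elements
K = 4                     # users
P_bs = 1.0                # BS transmit power budget (W)
P_elem_on = 0.01          # per-element power when switched on (W), assumed
P_static = 0.5            # static circuit power of BS/UAV payload (W), assumed

on = rng.random(N) < 0.5                     # element on/off selection
amp = 2.0 * np.ones(N)                       # amplification factors of active elements
phase = rng.uniform(0, 2 * np.pi, N)         # RIS phase shifts

# Cascaded BS -> RIS -> user channels (i.i.d. Rayleigh, purely for illustration)
g_br = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g_ru = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

theta = on * amp * np.exp(1j * phase)        # active-RIS reflection coefficients
h_eff = g_ru @ (theta * g_br)                # effective channel per user
noise = 1e-3

# Equal power split across users; inter-user interference omitted to keep the sketch short.
rates = np.log2(1.0 + (P_bs / K) * np.abs(h_eff) ** 2 / noise)
sum_rate = rates.sum()

total_power = P_bs + P_static + P_elem_on * on.sum()
energy_efficiency = sum_rate / total_power   # bit/s/Hz per Watt
print(f"sum rate = {sum_rate:.2f} bit/s/Hz, EE = {energy_efficiency:.2f}")
```

Switching more elements off lowers `total_power` but also weakens `h_eff`, which is exactly the trade-off the element-selection strategy is designed to manage.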
This paper proposes a blockchain-secured deep reinforcement learning (BC-DRL) optimization framework for data management and resource allocation in decentralized wireless mobile edge computing (MEC) networks. In our framework, we design a low-latency reputation-based proof-of-stake (RPoS) consensus protocol to select highly reliable blockchain-enabled base stations (BSs) to securely store MEC user requests and prevent data tampering attacks. We formulate the MEC resource allocation optimization as a constrained Markov decision process that balances minimum processing latency and denial-of-service (DoS) probability. We use aggregated MEC features as the DRL input, significantly reducing the high-dimensional input formed by the remaining service processing times of individual MEC requests. Our designed constrained DRL effectively attains the optimal resource allocations adapted to the dynamic DoS requirements. We provide extensive simulation results and analysis to validate that our BC-DRL framework achieves higher security, reliability, and resource utilization efficiency than benchmark blockchain consensus protocols and MEC resource allocation algorithms.
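To make the RPoS idea tangible, a minimal sketch of reputation-weighted producer selection and a simple reputation update is shown below. The scores, update rule, and function names are assumptions for illustration only and are not the paper's protocol specification.

```python
# Minimal sketch (assumed rules): pick a block-producing BS with probability
# proportional to its reputation "stake", then update reputation from the
# verification outcome of the produced block.
import random

reputations = {"BS1": 0.9, "BS2": 0.6, "BS3": 0.2}   # hypothetical scores in [0, 1]

def select_producer(rep):
    """Reputation-weighted random selection (stake ~ reputation)."""
    stations = list(rep)
    weights = [rep[s] for s in stations]
    return random.choices(stations, weights=weights, k=1)[0]

def update_reputation(rep, bs, block_valid, gain=0.05, penalty=0.2):
    """Reward honest behaviour, penalise invalid or tampered blocks."""
    if block_valid:
        rep[bs] = min(1.0, rep[bs] + gain)
    else:
        rep[bs] = max(0.0, rep[bs] - penalty)

producer = select_producer(reputations)
update_reputation(reputations, producer, block_valid=True)
print(producer, reputations)
```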
This paper designs a graph neural network (GNN) to improve bandwidth allocation for multiple legitimate wireless users transmitting to a base station in the presence of an eavesdropper. To improve privacy and prevent eavesdropping attacks, we propose a user scheduling algorithm that schedules users satisfying an instantaneous minimum secrecy rate constraint. Based on this, we optimize the bandwidth allocations with three algorithms, namely iterative search (IvS), GNN-based supervised learning (GNN-SL), and GNN-based unsupervised learning (GNN-USL). We present a computational complexity analysis showing that GNN-SL and GNN-USL can be more efficient than IvS, which is limited by the bandwidth block size. Numerical simulation results highlight that our proposed GNN-based resource allocations achieve a sum secrecy rate comparable to that of IvS with significantly lower computational complexity. Furthermore, we observe that the GNN approach is more robust to uncertainties in the eavesdropper's channel state information, especially compared with the best channel allocation scheme.
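The unsupervised variant (GNN-USL) can be pictured as training a network to output bandwidth fractions while maximizing a differentiable sum secrecy rate, so no labelled allocations are needed. The sketch below uses a tiny MLP in place of the GNN, and all channel values, power levels, and names are assumptions made purely for illustration.

```python
# Hypothetical sketch of the unsupervised training idea: learn bandwidth shares
# by maximising a differentiable sum secrecy rate (no labels required).
import torch

torch.manual_seed(0)
K, B_total = 4, 10e6                       # users, total bandwidth (Hz), assumed
h_user = torch.rand(K) * 1e-6              # legitimate channel gains, assumed
h_eve = torch.rand(K) * 1e-7               # eavesdropper channel gains, assumed
p, n0 = 0.1, 1e-17                         # per-user power (W), noise PSD (W/Hz)

net = torch.nn.Sequential(torch.nn.Linear(2 * K, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, K))   # stand-in for the GNN
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

features = torch.cat([h_user, h_eve]).unsqueeze(0)
for _ in range(200):
    frac = torch.softmax(net(features), dim=-1).squeeze(0)   # bandwidth fractions
    b = frac * B_total
    r_user = b * torch.log2(1 + p * h_user / (n0 * b))
    r_eve = b * torch.log2(1 + p * h_eve / (n0 * b))
    secrecy = torch.clamp(r_user - r_eve, min=0).sum()
    loss = -secrecy                                           # maximise secrecy rate
    opt.zero_grad(); loss.backward(); opt.step()

print(f"sum secrecy rate ~ {secrecy.item() / 1e6:.2f} Mbit/s")
```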
Geometry plays a significant role in monocular 3D object detection. It can be used to estimate object depth via the perspective projection between an object's physical size and its 2D projection in the image plane, which introduces mathematical priors into deep models. However, this projection process also introduces error amplification, where the error of the estimated height is amplified and propagated into the projected depth. This leads to unreliable depth inference and also impairs training stability. To tackle this problem, we propose a novel Geometry Uncertainty Propagation Network (GUPNet++) that models the geometry projection in a probabilistic manner. This ensures that depth predictions are well-bounded and associated with a reasonable uncertainty. The significance of introducing such geometric uncertainty is two-fold: (1) it models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning; (2) it can be converted into a highly reliable confidence score that indicates the quality of the 3D detection result, enabling more reliable inference. Experiments show that the proposed approach not only achieves state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
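The error amplification mentioned above can be seen from the pinhole projection depth = f * H_3d / h_2d: a small error in the estimated 2D height scales by f * H_3d / h_2d^2 when mapped to depth. The numbers below are made up purely to show the first-order propagation.

```python
# Back-of-the-envelope illustration of geometric error amplification (assumed values).
f = 1000.0        # focal length in pixels
H_3d = 1.6        # estimated physical object height in metres
h_2d = 40.0       # projected height in pixels
sigma_h = 2.0     # std. of the 2D height estimate in pixels

depth = f * H_3d / h_2d                        # 40 m
# First-order uncertainty propagation: |d(depth)/d(h_2d)| * sigma_h
sigma_depth = f * H_3d / h_2d ** 2 * sigma_h   # 2 m, i.e. a 5% pixel error -> 5% depth error
print(f"depth = {depth:.1f} m, propagated std = {sigma_depth:.2f} m")
```

The farther the object (smaller h_2d), the larger this derivative becomes, which is why modeling the projection probabilistically and carrying the uncertainty through to a confidence score is useful.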
Molecular docking, a pivotal computational tool for drug discovery, predicts the binding interactions between small molecules (ligands) and target proteins (receptors). Conventional physics-based docking tools, though widely used, face limitations in precision due to restricted conformational sampling and imprecise scoring functions. Recent efforts have employed deep learning techniques to enhance docking accuracy, but their generalization remains a concern due to limited training data. Leveraging the success of extensive and diverse data in other domains, we introduce HelixDock, a novel approach for site-specific molecular docking. Hundreds of millions of binding poses are generated by traditional docking tools, encompassing diverse protein targets and small molecules. Our deep learning-based docking model, an SE(3)-equivariant network, is pre-trained on this large-scale dataset and then fine-tuned with a small number of precise receptor-ligand complex structures. Comparative analyses against physics-based and deep learning-based baseline methods highlight HelixDock's superiority, especially on challenging test sets. Our study also elucidates the scaling laws of pre-trained molecular docking models, showing consistent improvements with increased model parameters and pre-training data quantities. Harnessing the power of extensive and diverse generated data holds promise for advancing AI-driven drug discovery.
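The two-stage recipe (large-scale pre-training on generated poses, then fine-tuning on a small set of experimental complexes) can be sketched as follows. The toy model, tensors, loss, and learning rates are placeholders chosen only to show the training flow, not HelixDock's architecture or data.

```python
# Highly simplified sketch (all components assumed) of pre-train-then-fine-tune.
import torch

model = torch.nn.Linear(16, 3)                 # stands in for the SE(3)-equivariant net
def coord_loss(pred, target):                  # RMSD-like coordinate objective, assumed
    return ((pred - target) ** 2).mean()

def run_stage(dataset, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for feats, coords in dataset:
            loss = coord_loss(model(feats), coords)
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: docking-tool-generated poses (large, noisy labels).
pretrain_set = [(torch.randn(32, 16), torch.randn(32, 3)) for _ in range(100)]
run_stage(pretrain_set, epochs=1, lr=1e-3)

# Stage 2: a small number of precise receptor-ligand complex structures.
finetune_set = [(torch.randn(32, 16), torch.randn(32, 3)) for _ in range(5)]
run_stage(finetune_set, epochs=10, lr=1e-4)
```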
This paper investigates the optimization of a reconfigurable intelligent surface (RIS) in an integrated sensing and communication (ISAC) system. To meet the demands of a growing number of devices, power-domain non-orthogonal multiple access (NOMA) is considered. However, traditional NOMA with a large number of devices is challenging due to the large decoding delay and error propagation introduced by successive interference cancellation (SIC). Thus, orthogonal multiple access (OMA) is integrated into NOMA to support more devices. We formulate a max-min problem to optimize the sensing beampattern subject to communication rate constraints, through joint power allocation, active beamforming, and RIS phase shift design. To solve the non-convex problem with a non-smooth objective function, we propose a low-complexity alternating optimization (AO) algorithm, in which a closed-form expression for the intra-cluster power allocation (intra-CPA) is derived, and penalty and successive convex approximation (SCA) methods are used to optimize the beamforming and phase shift design. Simulation results show the effectiveness of the proposed algorithm in terms of improving the minimum beampattern gain (MBPG) compared with other baselines. Furthermore, the trade-off between sensing and communication is analyzed and demonstrated in the simulation results.
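As a toy illustration of the intra-cluster power allocation setting, the sketch below computes the achievable rates of a two-user NOMA cluster under SIC: the weak user is decoded while treating the strong user as interference, and the strong user cancels the weak user's signal first. The channel gains, power split, and noise level are assumptions, and this is not the paper's closed-form intra-CPA expression.

```python
# Toy NOMA cluster rates under SIC (all values assumed for illustration).
import numpy as np

g_strong, g_weak = 1.0, 0.2    # effective channel gains after beamforming/RIS design
p_total, noise = 1.0, 0.01     # cluster power budget and noise power

def cluster_rates(alpha):
    """alpha = fraction of cluster power allocated to the weak user."""
    p_w, p_s = alpha * p_total, (1 - alpha) * p_total
    # Weak user decodes its own signal, treating the strong user's as interference.
    r_weak = np.log2(1 + p_w * g_weak / (p_s * g_weak + noise))
    # Strong user removes the weak user's signal via SIC before decoding its own.
    r_strong = np.log2(1 + p_s * g_strong / noise)
    return r_weak, r_strong

for alpha in (0.6, 0.8, 0.9):
    r_w, r_s = cluster_rates(alpha)
    print(f"alpha={alpha:.1f}: weak {r_w:.2f}, strong {r_s:.2f} bit/s/Hz")
```

Allocating more power to the weak user raises its rate at the expense of the strong user, which is the degree of freedom the intra-CPA exploits when meeting the communication rate constraints.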
As an emerging concept, the Metaverse has the potential to revolutionize social interaction in the post-pandemic era by establishing a digital world for online education, remote healthcare, immersive business, intelligent transportation, and advanced manufacturing. The goal is ambitious, yet the methodologies and technologies to achieve the full vision of the Metaverse remain unclear. In this paper, we first introduce the three infrastructure pillars that lay the foundation of the Metaverse, i.e., human-computer interfaces, sensing and communication systems, and network architectures. Then, we depict the roadmap towards the Metaverse, which consists of four stages with different applications. To support diverse applications in the Metaverse, we put forward a novel design methodology, task-oriented design, and further review the challenges and potential solutions. In the case study, we develop a prototype to illustrate how to synchronize a real-world device and its digital model in the Metaverse by task-oriented design, where a deep reinforcement learning algorithm is adopted to minimize the required communication throughput by optimizing the sampling and prediction systems subject to a synchronization error constraint.
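One way such a constrained objective can be fed to a reinforcement learning agent is through a penalized reward, as in the hedged sketch below: throughput is minimized while violations of the synchronization error constraint are penalized. The threshold, weight, and function names are assumptions, not the prototype's actual reward design.

```python
# Hedged sketch (assumed weights/names) of a penalized reward for the case study:
# minimise communication throughput subject to a synchronization error constraint.
def reward(throughput_bits, sync_error, error_threshold=0.05, penalty_weight=10.0):
    """Negative throughput as the objective, plus a penalty when the
    synchronization-error constraint is violated."""
    violation = max(0.0, sync_error - error_threshold)
    return -throughput_bits - penalty_weight * violation

print(reward(throughput_bits=1.2, sync_error=0.03))   # constraint satisfied
print(reward(throughput_bits=0.8, sync_error=0.10))   # violated -> penalised
```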
For cyber-physical systems in the 6G era, semantic communications connecting distributed devices for dynamic control and remote state estimation are required to guarantee application-level performance, rather than merely communication-centric performance. Semantics here is a measure of the usefulness of information transmissions. Semantic-aware transmission scheduling of a large system often involves a large decision-making space, and the optimal policy cannot be obtained effectively by existing algorithms. In this paper, we first investigate the fundamental properties of the optimal semantic-aware scheduling policy and then develop advanced deep reinforcement learning (DRL) algorithms by leveraging the theoretical guidelines. Our numerical results show that the proposed algorithms can substantially reduce training time and enhance training performance compared to benchmark algorithms.
Grant-free random access is promising for massive connectivity with sporadic transmissions in massive machine type communications (mMTC), where the hand-shaking between the access point (AP) and users is skipped, leading to high access efficiency. In grant-free random access, the AP needs to identify the active users and perform channel estimation and signal detection. Conventionally, pilot signals are required for the AP to achieve user activity detection and channel estimation before active user signal detection, which may still result in substantial overhead and latency. In this paper, to further reduce the overhead and latency, we explore the problem of grant-free random access without the use of pilot signals in a millimeter wave (mmWave) multiple-input multiple-output (MIMO) system, where the AP performs blind joint user activity detection, channel estimation and signal detection (UACESD). We show that the blind joint UACESD can be formulated as a constrained composite matrix factorization problem, which can be solved by exploiting the structures of the channel matrix and signal matrix. Leveraging our recently developed unitary approximate message passing based matrix factorization (UAMP-MF) algorithm, we design a message passing based Bayesian algorithm to solve the blind joint UACESD problem. Extensive simulation results demonstrate the effectiveness of the blind grant-free random access scheme.
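The signal model underlying the blind formulation can be sketched as a received matrix that factorizes into a channel matrix and a row-sparse signal matrix (only active users transmit), which is what the joint matrix factorization exploits. The dimensions, notation, and activity level below are assumptions for illustration, not the paper's exact setup or the UAMP-MF solver itself.

```python
# Sketch of the pilot-free received-signal model Y = H S + W with sporadic activity
# (all dimensions and values assumed).
import numpy as np

rng = np.random.default_rng(1)
M, K, L = 64, 100, 200                          # AP antennas, potential users, symbols per frame
activity = rng.random(K) < 0.05                 # sporadic user activity pattern

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
S = (rng.choice([-1, 1], (K, L)) + 1j * rng.choice([-1, 1], (K, L))) / np.sqrt(2)
S[~activity, :] = 0                             # inactive users transmit nothing

W = 0.05 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
Y = H @ S + W                                   # what the AP observes, with no pilots

print(f"{activity.sum()} active users out of {K}; Y has shape {Y.shape}")
```

Recovering `H`, `S`, and the activity pattern jointly from `Y` alone is the blind UACESD problem that the message-passing algorithm addresses.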
Remote state estimation of large-scale distributed dynamic processes plays an important role in Industry 4.0 applications. In this paper, we focus on the transmission scheduling problem of a remote estimation system. First, we derive structural properties of the optimal sensor scheduling policy over fading channels. Then, building on these theoretical guidelines, we develop a structure-enhanced deep reinforcement learning (DRL) framework for optimal scheduling of the system to achieve the minimum overall estimation mean-square error (MSE). In particular, we propose a structure-enhanced action selection method, which tends to select actions that obey the policy structure. This explores the action space more effectively and enhances the learning efficiency of DRL agents. Furthermore, we introduce a structure-enhanced loss function that adds penalties to actions that do not follow the policy structure. The new loss function guides the DRL agent to converge quickly to the optimal policy structure. Our numerical experiments illustrate that the proposed structure-enhanced DRL algorithms can reduce training time by 50% and reduce the remote estimation MSE by 10% to 25% compared to benchmark DRL algorithms. In addition, we show that the derived structural properties exist in a wide range of dynamic scheduling problems beyond remote state estimation.
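The structure-enhanced loss idea can be illustrated with a small sketch: a penalty term is added whenever the greedy action disagrees with the action a known structured (e.g., threshold-type) policy would pick. The structural rule, tensors, and names below are assumptions chosen only to show the mechanism, not the paper's exact loss.

```python
# Illustrative structure-penalty term added to a standard DRL loss (all names assumed).
import torch

def structure_penalty(q_values, structured_action, weight=0.1):
    """Penalise greedy actions that deviate from the structured policy's choice.

    q_values:          (batch, n_actions) predicted action values
    structured_action: (batch,) action index a threshold-type policy would pick
    """
    greedy = q_values.argmax(dim=-1)
    violations = (greedy != structured_action).float()
    return weight * violations.mean()

q = torch.tensor([[0.2, 0.9, 0.1], [0.8, 0.3, 0.4]])
structured = torch.tensor([1, 2])     # structured policy picks action 1, then action 2
td_loss = torch.tensor(0.05)          # stand-in for the usual temporal-difference term
total_loss = td_loss + structure_penalty(q, structured)
print(total_loss.item())
```

Because the penalty only nudges the agent toward the known structure rather than hard-constraining it, the DRL agent can still explore, which matches the reported gains in learning efficiency.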