Abstract: Distributed multiple-input multiple-output (D-MIMO) is a promising technology for realizing massive MIMO gains by connecting distributed antenna arrays over fiber, thereby overcoming the form-factor limitations of co-located MIMO. In this paper, we introduce the concept of a mobile D-MIMO (MD-MIMO) network, a further extension of D-MIMO in which the distributed antenna arrays are connected to the base station over a wireless link, allowing all radio network nodes to be mobile. This approach significantly improves deployment flexibility and reduces operating costs, enabling the network to adapt to the highly dynamic nature of next-generation (NextG) networks. We discuss use cases, system design, network architecture, and the key enabling technologies for MD-MIMO. Furthermore, we investigate a case study of MD-MIMO for vehicular networks, presenting detailed performance evaluations for both downlink and uplink. The results show that an MD-MIMO network can provide substantial improvements in network throughput and reliability.
Abstract: In dynamic spectrum access (DSA) networks, secondary users (SUs) need to opportunistically access primary users' (PUs) radio spectrum without causing significant interference. Since interaction between the SU and PU systems is limited, deep reinforcement learning (DRL) has been introduced to help SUs conduct spectrum access. Specifically, the deep recurrent Q network (DRQN) has been utilized in DSA networks, enabling SUs to aggregate information from recent experiences to make spectrum access decisions. However, DRQN is notorious for its poor sample efficiency: it needs a rather large number of training samples to tune its parameters, which is computationally demanding. In our recent work, the deep echo state network (DEQN) was introduced to DSA networks to address the sample efficiency issue of DRQN. In this paper, we analytically show that DEQN requires fewer training samples than DRQN to converge to the best policy. Furthermore, we introduce a method for determining appropriate hyperparameters for the DEQN, providing system design guidance for DEQN-based DSA networks. Extensive performance evaluation confirms that the DEQN-based DSA strategy requires substantially less computational power while outperforming DRQN-based DSA strategies.
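The sample-efficiency argument above rests on the defining property of echo state networks: the recurrent reservoir weights are fixed at random, and only a linear readout is trained. A minimal sketch of that idea is shown below; the dimensions, data, and regularization constant are illustrative assumptions, not the paper's actual DEQN configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not from the paper):
# channel observations in, action values out.
n_in, n_res, n_out = 4, 100, 2

# Fixed random input and recurrent (reservoir) weights -- never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Rescale so the spectral radius is below 1 (echo state property).
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy training data: T random observations with random target Q-values.
T = 200
U = rng.standard_normal((T, n_in))
Y = rng.standard_normal((T, n_out))

X = run_reservoir(U)

# Only the linear readout W_out is trained, via ridge regression --
# a single closed-form solve instead of backpropagation through time,
# which is where the sample- and compute-efficiency comes from.
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y).T

Q = X @ W_out.T  # predicted action values at each time step
```

Because training reduces to one regularized least-squares solve over the collected reservoir states, far fewer samples (and far less computation) are needed than for gradient-trained recurrent Q networks such as DRQN.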