Abstract:Fully harvesting the gains of multiple-input multiple-output (MIMO) systems requires accurate channel information. However, conventional channel acquisition methods mainly rely on pilot training signals, resulting in significant training overhead (time, energy, spectrum). Digital twin-aided communications have been proposed in [1] to reduce or eliminate this overhead by approximating the real world with a digital replica. However, implementing a digital twin-aided communication system brings new challenges, in particular how to model the 3D environment and its associated EM properties, and how to update the environment dynamics in a coherent manner. To address these challenges, motivated by the latest advancements in computer vision, 3D reconstruction, and neural radiance fields, we propose an end-to-end deep learning framework for future-generation wireless systems that can reconstruct the 3D EM field covered by a wireless access point, based on widely available crowd-sourced world-locked wireless samples between the access point and the devices. This visionary framework is grounded in classical EM theory and employs deep learning models to learn the EM properties and interaction behaviors of the objects in the environment. Simulation results demonstrate that the proposed learnable digital twin can implicitly learn the EM properties of the objects, accurately predict wireless channels, and generalize to changes in the environment, highlighting the promise of this novel direction for future-generation wireless platforms.
Abstract:Accurate localization is crucial for various applications, including autonomous vehicles and next-generation wireless networks. However, the reliability and precision of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), are compromised by multi-path errors and non-line-of-sight scenarios. This paper presents a novel approach to enhance GPS accuracy by combining visual data from RGB cameras with wireless signals captured at millimeter-wave (mmWave) and sub-terahertz (sub-THz) base stations. We propose a sensing-aided framework for (i) site-specific GPS data characterization and (ii) GPS position de-noising that utilizes multi-modal visual and wireless information. Our approach is validated in a realistic Vehicle-to-Infrastructure (V2I) scenario using a comprehensive real-world dataset, demonstrating a substantial reduction in localization error to sub-meter levels. This method represents a significant advancement in achieving precise localization, particularly beneficial for high-mobility applications in 5G and beyond networks.
Abstract:High data rate and low-latency vehicle-to-vehicle (V2V) communication are essential for future intelligent transport systems to enable coordination, enhance safety, and support distributed computing and intelligence requirements. Developing effective communication strategies, however, demands realistic test scenarios and datasets. This is particularly important in the high-frequency bands, where more spectrum is available, yet harvesting this bandwidth is challenged by the need for directional transmission and the sensitivity of signal propagation to blockages. This work presents the first large-scale multi-modal dataset for studying mmWave vehicle-to-vehicle communications. It features a two-vehicle testbed that collects data from a 360-degree camera, four radars, four 60 GHz phased arrays, a 3D lidar, and two precision GPS receivers. The dataset covers vehicles driving during the day and night over 120 km in intercity and rural settings, with speeds up to 100 km per hour. More than one million objects were detected across all images, from trucks to bicycles. This work further includes detailed dataset statistics that demonstrate the coverage of diverse situations and highlights how this dataset can enable novel machine-learning applications.
Abstract:Reconfigurable intelligent surfaces (RISs) are envisioned to play a key role in future wireless communication networks. However, channel estimation in RIS-aided wireless networks is challenging due to their passive nature and the large number of reflective elements, leading to high channel estimation overhead. Additionally, conventional methods like beam sweeping, which do not rely on explicit channel state information, often struggle to manage interference in multi-user networks. In this paper, we propose a novel approach that leverages digital twins (DTs) of the physical environment to approximate channels using electromagnetic 3D models and ray tracing, thus relaxing the need for channel estimation and extensive over-the-air computations in RIS-aided wireless networks. To address the channel approximation errors of the digital twin, we further refine this approach with a DT-specific robust transmission design that reliably meets minimum desired rates. The results show that our method secures these rates over 90% of the time, significantly outperforming beam sweeping, which achieves these rates less than 8% of the time due to its poor management of transmit power and interference.
Abstract:In frequency-division duplexing (FDD) multiple-input multiple-output (MIMO) systems, obtaining accurate downlink channel state information (CSI) for precoding is highly challenging due to the tremendous feedback overhead that grows with the number of antennas. Utilizing uplink pilots for downlink CSI estimation is a promising approach that can eliminate CSI feedback. However, the downlink CSI estimation accuracy diminishes significantly as the number of channel paths increases, resulting in reduced spectral efficiency. In this paper, we demonstrate that achieving downlink spectral efficiency comparable to perfect CSI is feasible by combining uplink CSI with limited downlink CSI feedback information. Our proposed downlink CSI feedback strategy transmits quantized phase information of downlink channel paths, deviating from conventional limited feedback methods. We put forth a mean square error (MSE)-optimal downlink channel reconstruction method that jointly exploits the uplink CSI and the limited downlink CSI. Armed with the MSE-optimal estimator, we derive the MSE as a function of the number of feedback bits used for phase quantization. Subsequently, we present an optimal feedback bit allocation method that minimizes the MSE of the reconstructed channel under phase quantization. Using a robust downlink precoding technique, we establish that the proposed downlink channel reconstruction method is sufficient for attaining a sum spectral efficiency comparable to perfect CSI.
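As a toy illustration of the feedback idea in the abstract above (not the paper's actual scheme), the sketch below uniformly quantizes per-path phases with B feedback bits and checks the empirical reconstruction MSE against the standard uniform-quantization prediction, where each extra bit cuts the phase MSE by a factor of four. All names and parameters are hypothetical.

```python
import numpy as np

def quantize_phase(phase, bits):
    """Uniformly quantize a phase in [0, 2*pi) using `bits` feedback bits."""
    levels = 2 ** bits
    step = 2 * np.pi / levels
    index = np.floor(np.mod(phase, 2 * np.pi) / step)  # integer index fed back
    return (index + 0.5) * step  # reconstruction at the quantization-cell center

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, 100_000)
mse = {}
for bits in (2, 4, 6):
    err = phases - quantize_phase(phases, bits)
    mse[bits] = np.mean(err ** 2)
    # Uniform-quantization theory predicts MSE ~ step^2 / 12 = pi^2 / (3 * 4**bits).
    print(bits, mse[bits], np.pi ** 2 / (3 * 4 ** bits))
```

This is the kind of MSE-vs-bits relationship that an optimal bit allocation across paths would trade off; the paper's MSE-optimal estimator additionally combines the quantized phases with uplink CSI.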
Abstract:Compressive sensing is a promising solution for channel estimation in multiple-input multiple-output (MIMO) systems with large antenna arrays and constrained hardware. Utilizing site-specific channel data from real-world systems, deep learning can be employed to learn compressive sensing measurement vectors with minimum redundancy, thereby focusing sensing power on promising spatial directions of the channel. Collecting real-world channel data, however, is challenging due to the high overhead resulting from the large number of antennas and hardware constraints. In this paper, we propose leveraging a site-specific digital twin to generate synthetic channel data, which shares a similar distribution with real-world data. The synthetic data is then used to train the deep learning models for learning measurement vectors and hybrid precoder/combiner design in an end-to-end manner. We further propose a model refinement approach that fine-tunes the model pre-trained on the digital twin data with a small amount of real-world data. The evaluation results show that, by training the model on the digital twin data, the learned measurement vectors can be efficiently adapted to the environment geometry, leading to high hybrid precoding performance in real-world deployments. Moreover, the model refinement approach enables the digital twin-aided model to achieve performance comparable to a model trained on the real-world dataset, with a significantly reduced amount of real-world data.
Abstract:Digital twins (DTs), which are virtual environments that simulate, predict, and optimize the performance of their physical counterparts, are envisioned to be essential technologies for advancing next-generation wireless networks. While DTs have been studied extensively for wireless networks, their use in conjunction with autonomous vehicles with programmable mobility remains relatively under-explored. In this paper, we study DTs used as a development environment to design, deploy, and test artificial intelligence (AI) techniques that use real-time observations, e.g., radio key performance indicators, for vehicle trajectory and network optimization decisions in autonomous vehicle networks (AVNs). We first compare and contrast the use of simulation, digital twin (software-in-the-loop (SITL)), sandbox (hardware-in-the-loop (HITL)), and physical testbed environments for their suitability in developing and testing AI algorithms for AVNs. We then review various representative use cases of DTs for AVN scenarios. Finally, we provide an example from the NSF AERPAW platform where a DT is used to develop and test AI-aided solutions for autonomous unmanned aerial vehicles that localize a signal source based solely on link quality measurements. Our results in the physical testbed show that SITL DTs, when supplemented with data from real-world (RW) measurements and simulations, can serve as an ideal environment for developing and testing innovative AI solutions for AVNs.
Abstract:Beam alignment (BA) in modern millimeter wave standards such as 5G NR and WiGig (802.11ay) is based on exhaustive and/or hierarchical beam searches over pre-defined codebooks of wide and narrow beams. This approach is slow and bandwidth/power-intensive, and is a considerable hindrance to the wide deployment of millimeter wave bands. A new approach is needed as we move towards 6G. BA is a promising use case for deep learning (DL) in the 6G air interface, offering the possibility of automated custom tuning of the BA procedure for each cell based on its unique propagation environment and user equipment (UE) location patterns. We overview and advocate for such an approach in this paper, which we term site-specific beam alignment (SSBA). SSBA largely eliminates wasteful searches and allows UEs to be found much more quickly and reliably, without many of the drawbacks of other machine learning-aided approaches. We first overview and demonstrate new results on SSBA, then identify the key open challenges facing SSBA.
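To ground the discussion in the abstract above, the following is a minimal sketch of the exhaustive codebook search that site-specific beam alignment aims to replace: sweep every beam in a DFT codebook, measure the received gain, and pick the strongest. The codebook size, array size, and single-path channel are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def dft_codebook(n_ant, n_beams):
    """Columns are unit-norm DFT beamforming vectors over `n_beams` directions."""
    angles = np.arange(n_beams) / n_beams  # normalized spatial frequencies
    n = np.arange(n_ant)[:, None]
    return np.exp(2j * np.pi * n * angles[None, :]) / np.sqrt(n_ant)

def exhaustive_beam_search(h, codebook):
    """Measure |w^H h| for every beam and return the best index and all gains."""
    gains = np.abs(codebook.conj().T @ h)
    return int(np.argmax(gains)), gains

n_ant = 32
codebook = dft_codebook(n_ant, 64)
# Synthetic single-path channel arriving from a random normalized direction.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 1)
h = np.exp(2j * np.pi * np.arange(n_ant) * theta)
best, gains = exhaustive_beam_search(h, codebook)
print(best, gains[best])  # the winning beam points near the path's direction
```

Every one of the 64 measurements above costs air time and power; the deep-learning approach advocated in the abstract learns a site-specific shortlist so that only a few candidate beams need to be probed.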
Abstract:Localization in outdoor wireless systems typically requires transmitting specific reference signals to estimate distance (trilateration methods) or angle (triangulation methods). These methods impose communication overhead, need a line-of-sight (LoS) link to work well, and require multiple base stations, often imposing synchronization or specific hardware requirements. Fingerprinting has none of these drawbacks, but building its database requires substantial human effort to collect real-world measurements. For a long time, this issue limited the size of databases and thus their performance. This work proposes significantly reducing the human effort of building fingerprinting databases by populating them with \textit{digital twin RF maps}. These RF maps are built from ray-tracing simulations on a digital replica of the environment across several frequency bands and beamforming configurations. Online user fingerprints are then matched against this spatial database. The approach was evaluated with practical simulations using realistic propagation models and user measurements. Our experiments show sub-meter localization errors at a non-line-of-sight (NLoS) location 95\% of the time using reasonable user measurement report sizes. The results highlight the promising potential of the proposed digital twin approach for ubiquitous wide-area 6G localization.
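The matching step described in the abstract above can be sketched as a nearest-neighbor search in signal space: each grid location in the digital twin RF map stores a feature vector (e.g., per-beam received power), and an online fingerprint is matched to the closest stored vectors. The data layout, grid, and synthetic "RSS" field below are hypothetical, purely for illustration.

```python
import numpy as np

def localize(fingerprint, grid_xy, features, k=3):
    """k-nearest-neighbor match in signal space; average the top-k positions."""
    dists = np.linalg.norm(features - fingerprint, axis=1)
    nearest = np.argsort(dists)[:k]
    return grid_xy[nearest].mean(axis=0)

# Hypothetical 10x10 grid with a smooth synthetic per-beam signature.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
feats = np.stack(
    [np.sin(0.3 * grid[:, 0] + b) + np.cos(0.2 * grid[:, 1] * (b + 1))
     for b in range(8)],  # 8 "beams" per location
    axis=1,
)

true_idx = 42
est = localize(feats[true_idx], grid, feats, k=1)
print(est, grid[true_idx])  # exact match with k=1 on noiseless data
```

In practice the stored features come from ray tracing rather than a closed-form field, the online fingerprint is noisy, and k > 1 with distance weighting smooths the estimate; the digital twin's contribution is that no human measurement campaign is needed to populate `feats`.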
Abstract:Deep learning (DL) approaches have demonstrated high performance in compressing and reconstructing the channel state information (CSI) and reducing the CSI feedback overhead in massive MIMO systems. One key challenge, however, with the DL approaches is the demand for extensive training data. Collecting this real-world CSI data incurs significant overhead that hinders the DL approaches from scaling to a large number of communication sites. To address this challenge, we propose a novel direction that utilizes site-specific \textit{digital twins} to aid the training of DL models. The proposed digital twin approach generates site-specific synthetic CSI data from an EM 3D model and ray tracing, which can then be used to train the DL model without real-world data collection. To further improve the performance, we adopt online data selection to refine the DL model training with a small real-world CSI dataset. Results show that a DL model trained solely on the digital twin data can achieve high performance when tested in a real-world deployment. Further, by leveraging domain adaptation techniques, the proposed approach requires orders of magnitude less real-world data to approach the performance of a model trained entirely on a real-world CSI dataset.