TU Wien
Abstract:Machine learning for mobile network analysis, planning, and optimization is often limited by the lack of large, comprehensive real-world datasets. This paper introduces the Vienna 4G/5G Drive-Test Dataset, a city-scale open dataset of georeferenced Long Term Evolution (LTE) and 5G New Radio (NR) measurements collected across Vienna, Austria. The dataset combines passive wideband scanner observations with active handset logs, providing complementary network-side and user-side views of deployed radio access networks. The measurements cover diverse urban and suburban settings and are aligned with time and location information to support consistent evaluation. For a representative subset of base stations (BSs), we provide inferred deployment descriptors, including estimated BS locations, sector azimuths, and antenna heights. The release further includes high-resolution building and terrain models, enabling geometry-conditioned learning and calibration of deterministic approaches such as ray tracing. To facilitate practical reuse, the data are organized into scanner, handset, estimated cell information, and city-model components, and the accompanying documentation describes the available fields and intended joins between them. The dataset enables reproducible benchmarking across environment-aware learning, propagation modeling, coverage analysis, and ray-tracing calibration workflows.
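For orientation, the sketch below shows one plausible way to combine the dataset components described above. The file names and column names (scanner.csv, handset.csv, estimated_cells.csv, cell_id, and so on) are hypothetical placeholders, not the released schema; the actual fields and intended joins are specified in the accompanying documentation.

# Illustrative sketch only: file and column names are hypothetical stand-ins for the
# scanner, handset, and estimated-cell-information components of the dataset.
import pandas as pd

scanner = pd.read_csv("scanner.csv", parse_dates=["timestamp"])      # passive wideband scanner
handset = pd.read_csv("handset.csv", parse_dates=["timestamp"])      # active handset logs
cells   = pd.read_csv("estimated_cells.csv")                         # inferred BS descriptors

# Attach inferred deployment descriptors (location, azimuth, height) to scanner rows.
scanner = scanner.merge(cells, on="cell_id", how="left")

# Align scanner and handset views on time (nearest sample within 1 s).
aligned = pd.merge_asof(
    scanner.sort_values("timestamp"),
    handset.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("1s"),
    suffixes=("_scanner", "_handset"),
)
print(aligned.head())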
Abstract:Ensuring user fairness in wireless communications is a fundamental challenge, as balancing the trade-off between fairness and sum rate leads to a non-convex, multi-objective optimization whose complexity grows with network scale. To alleviate this conflict, we propose an unsupervised, optimization-driven learning approach built on the wireless transformer (WiT) architecture that learns from channel state information (CSI) features. We reformulate the trade-off by combining the sum-rate and fairness objectives through a Lagrangian multiplier, which is updated automatically via a dual-ascent algorithm. This mechanism enforces a controllable fairness constraint while simultaneously maximizing the sum rate, effectively tracing the Pareto front between the two conflicting objectives. Our findings show that the proposed approach offers a flexible way to manage the sum-rate-fairness trade-off under a prescribed fairness level.
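A minimal sketch of the dual-ascent mechanism follows. It uses a generic power-allocation network, a toy interference-free rate model, and Jain's index as the fairness measure, all of which are assumptions standing in for the WiT-based formulation of the paper; only the Lagrangian-plus-dual-ascent structure is the point being illustrated.

# Minimal sketch of the dual-ascent idea, not the WiT architecture itself: a generic
# power-allocation network is trained on the Lagrangian of sum rate and a fairness
# constraint (here Jain's index >= target), with the multiplier updated by dual ascent.
import torch

def jain_index(rates):                       # fairness measure in [1/K, 1]
    return rates.sum() ** 2 / (rates.numel() * (rates ** 2).sum())

K, target_fairness, step_dual = 4, 0.9, 0.05
net = torch.nn.Sequential(torch.nn.Linear(K, 64), torch.nn.ReLU(), torch.nn.Linear(64, K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = torch.tensor(0.0)                      # Lagrange multiplier

for _ in range(1000):
    g = torch.rand(K)                        # toy channel gains (stand-in for CSI features)
    p = torch.softmax(net(g), dim=0)         # normalized power allocation
    rates = torch.log2(1 + g * p)            # toy interference-free rate model
    violation = target_fairness - jain_index(rates)
    loss = -rates.sum() + lam * violation    # Lagrangian: maximize sum rate s.t. fairness
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam = torch.clamp(lam + step_dual * violation.detach(), min=0.0)  # dual-ascent update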
Abstract:Future wireless communications will rely on multiple-input multiple-output (MIMO) beamforming operating at millimeter wave (mmWave) frequency bands to deliver high data rates. To support flexible spatial processing and meet the demands of latency-critical applications, it is essential to use fully digital mmWave MIMO beamforming, which relies on accurate channel estimation. However, ensuring power efficiency in fully digital mmWave MIMO systems requires the use of low-resolution digital-to-analog converters (DACs) and analog-to-digital converters (ADCs). The reduced resolution of these quantizers introduces distortion in both the transmitted and received signals, ultimately degrading system performance. In this paper, we investigate the channel estimation performance of mmWave MIMO systems employing fully digital beamforming with low-resolution quantization, under practical system constraints. We evaluate the system performance in terms of spectral efficiency (SE) and energy efficiency (EE). Simulation results demonstrate that a moderate quantization resolution of 4 bits per DAC/ADC offers a favorable trade-off between energy consumption and achievable data rate.
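The following toy sketch illustrates the quantization-distortion trend the abstract refers to: a uniform quantizer applied to a Gaussian signal, with the signal-to-quantization-noise ratio evaluated for 1 to 8 bits of resolution. It is not the SE/EE evaluation of the paper, and the quantizer clipping range is an assumption.

# Toy sketch of converter distortion: a uniform mid-rise quantizer applied to a Gaussian
# signal, with the resulting signal-to-quantization-noise ratio per resolution.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)                    # stand-in transmit/receive samples

def quantize(x, bits):
    levels = 2 ** bits
    step = 2 * 3 * x.std() / levels                 # assumed clipping at roughly +/- 3 sigma
    q = np.clip(np.round(x / step - 0.5) + 0.5, -(levels / 2 - 0.5), levels / 2 - 0.5)
    return q * step

for bits in range(1, 9):
    e = x - quantize(x, bits)
    sqnr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
    print(f"{bits}-bit quantizer: SQNR = {sqnr_db:5.1f} dB")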




Abstract:Future wireless multiple-input multiple-output (MIMO) systems will integrate both sub-6 GHz and millimeter wave (mmWave) frequency bands to meet the growing demands for high data rates. MIMO link establishment typically requires accurate channel estimation, which is particularly challenging at mmWave frequencies due to the low signal-to-noise ratio (SNR). In this paper, we propose two novel deep learning-based methods for estimating mmWave MIMO channels by leveraging out-of-band information from the sub-6 GHz band. The first method employs a convolutional neural network (CNN), while the second method utilizes a UNet architecture. We compare these proposed methods against deep-learning methods that rely solely on in-band information, as well as against other state-of-the-art out-of-band-aided methods. Simulation results show that our proposed out-of-band-aided deep-learning methods outperform existing alternatives in terms of achievable spectral efficiency.
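A minimal sketch of the out-of-band idea is given below, using a generic CNN rather than the exact CNN or UNet of the paper: the real and imaginary parts of a sub-6 GHz channel estimate are mapped to an estimate of the mmWave channel. The array dimensions are assumptions.

# Illustrative sketch of out-of-band-aided estimation with a generic CNN (not the
# architecture of the paper): sub-6 GHz channel in, mmWave channel estimate out.
import torch
import torch.nn as nn

N_SUB, N_MM = 8, 64        # assumed sub-6 GHz and mmWave antenna dimensions

class Sub6ToMmWave(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * N_SUB * N_SUB, 2 * N_MM * N_MM),
        )

    def forward(self, h_sub):                    # h_sub: (batch, 2, N_SUB, N_SUB)
        out = self.head(self.features(h_sub))
        return out.view(-1, 2, N_MM, N_MM)       # real/imag mmWave channel estimate

model = Sub6ToMmWave()
h_sub = torch.randn(4, 2, N_SUB, N_SUB)          # stand-in sub-6 GHz channel estimates
h_mm_hat = model(h_sub)
print(h_mm_hat.shape)                            # torch.Size([4, 2, 64, 64])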
Abstract:Distinguishing between reconfigurable intelligent surface (RIS) assisted paths and non-line-of-sight (NLOS) paths is a fundamental problem for RIS-assisted integrated sensing and communication. In this work, we propose a pattern alternation scheme for the RIS response that dedicates part of the RIS as a dynamic portion that modulates the estimated channel power, which considerably helps the user equipments (UEs) identify the RIS-assisted paths. Under such a dynamic setup, we formulate the detection framework for a single UE, where we develop a statistical model of the estimated channel power, allowing us to analytically evaluate the performance of the system. We investigate our method under two critical factors: the number of RIS elements allocated to the dynamic part and the allocation of RIS elements among different users. Simulation results verify the accuracy of our analysis.
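The toy simulation below illustrates the detection idea: a subset of RIS elements (the dynamic part) flips its phase every slot, which modulates the power of the RIS-assisted path, while a non-RIS NLOS path is unaffected; a simple even/odd slot power statistic then separates the two. All parameter values and the channel model are illustrative, not those of the paper.

# Toy sketch: the dynamic part of the RIS alternates its phase across slots, so the
# estimated power of the RIS-assisted path oscillates, whereas an NLOS path does not.
import numpy as np

rng = np.random.default_rng(1)
N, N_dyn, slots, noise_std = 64, 16, 20, 0.5

h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # BS-RIS channel
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # RIS-UE channel
cascade = np.abs(h * g)                      # per-element cascaded gains, co-phased by the RIS
nlos_gain = 4.0 + 0.0j                       # stand-in gain of a non-RIS NLOS path

def power_per_slot(through_ris):
    p = np.empty(slots)
    for t in range(slots):
        if through_ris:
            sign = -1.0 if t % 2 else 1.0    # dynamic part flips its phase every slot
            amp = cascade[N_dyn:].sum() + sign * cascade[:N_dyn].sum()
        else:
            amp = nlos_gain                  # NLOS path: independent of the RIS pattern
        noise = noise_std * (rng.standard_normal() + 1j * rng.standard_normal())
        p[t] = abs(amp + noise) ** 2         # estimated channel power in slot t
    return p

for label, flag in [("RIS-assisted path", True), ("NLOS path", False)]:
    p = power_per_slot(flag)
    stat = abs(p[::2].mean() - p[1::2].mean())
    print(f"{label:17s} even/odd power difference = {stat:8.1f}")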




Abstract:Integrated sensing and communication (ISAC) is envisioned to be one of the paradigms upon which next-generation mobile networks will be built, extending localization and tracking capabilities and enabling environment-aware wireless access. A key aspect of sensing integration is parameter estimation, which involves extracting information about the surrounding environment, such as the direction, distance, and velocity of the objects within it. This estimation problem is typically high-dimensional, which leads to significant computational complexity if performed jointly across multiple sensing dimensions, such as space, frequency, and time. Additionally, because sensing is incorporated on top of data transmission, the time window available for sensing is likely to be short, resulting in an estimation problem where only a single snapshot is accessible. In this work, we propose PLAIN, a tensor-based estimation architecture that flexibly scales with multiple sensing dimensions and can handle high dimensionality, limited measurement time, and super-resolution requirements. It consists of three stages: a compression stage, where the high-dimensional input is mapped to a lower-dimensional representation without sacrificing resolution; a decoupled estimation stage, where the parameters across the different dimensions are estimated in parallel with low complexity; and an input-based fusion stage, where the decoupled parameters are fused together to form a paired multidimensional estimate. We investigate the performance of the architecture for different configurations and compare it against practical sequential and joint estimation baselines, as well as theoretical bounds. Our results show that PLAIN, using tools from tensor algebra, subspace-based processing, and compressed sensing, can scale flexibly with dimensionality while operating with low complexity and maintaining super-resolution.
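To make the compress / decouple / fuse idea concrete, the heavily simplified sketch below treats a single-snapshot space x frequency measurement of two paths: per-dimension compression via the SVD, independent 1D peak searches for angle and delay, and a final pairing step against the original measurement. This is a two-dimensional caricature under assumed sizes, not the PLAIN algorithm itself.

# Simplified sketch of the three-stage structure described above (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 32                                           # antennas, subcarriers
angles_true, delays_true = [0.20, -0.35], [0.15, 0.40]  # normalized spatial/delay frequencies

a = lambda f, L: np.exp(2j * np.pi * f * np.arange(L))  # steering / delay signature vector
Y = sum(np.outer(a(th, M), a(-tau, N)) for th, tau in zip(angles_true, delays_true))
Y = Y + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Stage 1 (compression): dominant left/right singular subspaces of the single snapshot.
U, s, Vh = np.linalg.svd(Y)
Us, Vs = U[:, :2], Vh[:2].conj().T

# Stage 2 (decoupled estimation): 1D peak search per dimension, independently.
grid = np.arange(-0.5, 0.5, 1e-3)
def two_peaks(subspace, L):
    spec = np.abs(np.array([subspace.conj().T @ a(f, L) for f in grid])).max(axis=1)
    picks = []
    for idx in np.argsort(spec)[::-1]:                  # greedy picking with exclusion zone
        if all(abs(grid[idx] - p) > 0.1 for p in picks):
            picks.append(grid[idx])
        if len(picks) == 2:
            break
    return picks

ang_hat, del_hat = two_peaks(Us, M), two_peaks(Vs, N)

# Stage 3 (fusion): pair angles and delays by matching against the input measurement.
pairs = [(th, max(del_hat, key=lambda tau: abs(a(th, M).conj() @ Y @ a(tau, N))))
         for th in ang_hat]
print("estimated (angle, delay) pairs:", [(round(t, 3), round(d, 3)) for t, d in pairs])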




Abstract:In this study, we elaborate on the concept of a scalable anomalous reflector (AR) and analyze the angular response, frequency response, and spatial scalability of a designed AR across a broad range of angles and frequencies. We utilize theoretical models and ray-tracing simulations to investigate the communication performance of two scalable finite ARs of different sizes: a smaller configuration with a 48 x 48 array of unit cells and a larger one constructed by combining four of the smaller ARs into a 96 x 96 array. To validate the developed theoretical approach, we conducted measurements in an auditorium to evaluate the received power through an AR link at different angles and frequencies. In addition, models of scalable deflectors are implemented in the MATLAB ray tracer to simulate the measurement scenario. The results from theoretical calculations and ray-tracing simulations show good agreement with the measurement results.
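As a rough illustration of the scalability aspect, the sketch below models a one-dimensional cut of an AR as an array with a linear phase gradient that deflects a normally incident wave towards 30 degrees, and compares a smaller and a doubled aperture. The element spacing, angles, and lossless unit cells are assumptions, and mutual coupling is ignored.

# Array-factor sketch: reflected beam of a phase-gradient deflector for two aperture sizes.
import numpy as np

wavelength, spacing = 1.0, 0.5                      # normalized units, half-wavelength cells
theta_inc, theta_ref = 0.0, np.deg2rad(30.0)        # incidence and design reflection angles
angles = np.deg2rad(np.linspace(-90, 90, 1801))

def reflected_pattern(n_elements):
    n = np.arange(n_elements)
    # linear phase gradient that redirects the incident plane wave towards theta_ref
    gradient = -2 * np.pi * spacing / wavelength * (np.sin(theta_ref) - np.sin(theta_inc)) * n
    af = np.array([np.sum(np.exp(1j * (2 * np.pi * spacing / wavelength
                                       * (np.sin(th) - np.sin(theta_inc)) * n + gradient)))
                   for th in angles])
    return np.abs(af) ** 2 / n_elements             # normalized so the peak grows with aperture

for n_elem, label in [(48, "48-element row"), (96, "96-element row (combined AR)")]:
    p = reflected_pattern(n_elem)
    peak_angle = np.rad2deg(angles[np.argmax(p)])
    print(f"{label}: peak at {peak_angle:5.1f} deg, peak gain {10 * np.log10(p.max()):5.1f} dB")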




Abstract:In this paper, we systematically study the electromagnetic (EM) and communication aspects of an RIS through EM simulations, system-level and ray-tracing simulations, and finally measurements. We simulate a nearly perfect, lossless RIS and a realistic, lossy anomalous reflector (AR) in different ray tracers and analyze the large-scale fading of simple RIS-assisted links. We also compare results obtained with continuous unit-cell reflection phases and with phases quantized to one- to four-bit resolution. Finally, we perform over-the-air communication link measurements in an indoor setting with a manufactured sample of a wide-angle AR. The EM, system-level, and ray-tracing simulation results show good agreement with the measurement results, confirming that the introduced macroscopic EM model of the RIS is consistent with our proposed communication models, both for an ideal RIS and for a realistic AR.
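A back-of-the-envelope sketch of the phase-quantization comparison follows: the average received power of a simple RIS-assisted link over a Rayleigh cascaded channel with continuous co-phasing versus 1 to 4 bit quantized unit-cell phases. The channel model and element count are assumptions, and the macroscopic EM model and unit-cell losses of the paper are not included.

# Toy sketch of the quantization trend: continuous vs. b-bit unit-cell phases.
import numpy as np

rng = np.random.default_rng(2)
N, trials = 256, 200

def mean_gain(bits=None):
    gains = []
    for _ in range(trials):
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # TX-RIS
        g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS-RX
        ideal = -np.angle(h * g)                        # continuous co-phasing per unit cell
        if bits is not None:                            # quantize to 2**bits phase levels
            step = 2 * np.pi / 2 ** bits
            phase = np.round(ideal / step) * step
        else:
            phase = ideal
        gains.append(np.abs(np.sum(h * g * np.exp(1j * phase))) ** 2)
    return np.mean(gains)

ref = mean_gain(None)
print("continuous phases:  0.00 dB (reference)")
for b in range(1, 5):
    print(f"{b}-bit phases: {10 * np.log10(mean_gain(b) / ref):6.2f} dB")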




Abstract:In this work, we present a wireless localization method that learns from unlabeled channel estimates in a self-supervised manner. Our self-supervised method learns general-purpose channel features that are robust to fading and system impairments. The learned representations transfer easily to new environments and are ready to use for other wireless downstream tasks. To the best of our knowledge, the proposed method is the first joint-embedding self-supervised approach that forgoes reliance on contrastive pairs of channel estimates. Our approach outperforms fully supervised techniques in small-data regimes under fine-tuning and, in some cases, under linear evaluation. We assess the performance in centralized and distributed massive MIMO systems for multiple datasets. Moreover, our method works indoors and outdoors without additional assumptions or design changes.
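The sketch below shows a generic non-contrastive joint-embedding setup on channel estimates, i.e. the family of methods the abstract refers to; the actual encoder, augmentations, and loss of the paper may differ. Two augmented views of an unlabeled channel estimate are embedded and pulled together, and a variance regularizer (rather than negative pairs) keeps the embeddings from collapsing.

# Generic non-contrastive joint-embedding sketch on stand-in channel estimates.
import torch
import torch.nn.functional as F

D_IN, D_EMB = 128, 64                            # assumed CSI feature and embedding sizes
encoder = torch.nn.Sequential(torch.nn.Linear(D_IN, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, D_EMB))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def augment(h):                                   # toy augmentation: phase noise + AWGN
    return h * torch.exp(1j * 0.1 * torch.randn(h.shape[0], 1)) + 0.05 * torch.randn_like(h)

for _ in range(100):
    csi = torch.randn(32, D_IN // 2, dtype=torch.cfloat)      # stand-in channel estimates
    z = [encoder(torch.view_as_real(augment(csi)).flatten(1)) for _ in range(2)]
    invariance = F.mse_loss(z[0], z[1])                        # pull the two views together
    std = torch.cat(z).std(dim=0)
    variance = F.relu(1.0 - std).mean()                        # keep embeddings from collapsing
    loss = invariance + variance
    opt.zero_grad()
    loss.backward()
    opt.step()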




Abstract:Deep neural networks (DNNs) have become a popular approach for wireless localization based on channel state information (CSI). A common practice is to feed raw CSI to the network and let it learn channel representations relevant for mapping to location information. However, various works show that raw CSI can be very sensitive to system impairments and small changes in the environment. Conversely, hand-designed features may limit the DNN's ability to learn channel representations. In this work, we propose attention-based CSI processing for robust feature learning. We evaluate the performance of the attended features in centralized and distributed massive MIMO systems using ray-tracing channels in two non-stationary railway track environments. Compared to a baseline DNN, our approach provides substantially better performance.
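A minimal sketch of the attention idea follows, using a generic self-attention front end rather than necessarily the exact design of the paper: per-antenna CSI vectors are treated as tokens, a self-attention layer learns which antennas and features to emphasize, and a small head regresses the 2D position. All dimensions are assumptions.

# Generic self-attention front end over per-antenna CSI tokens for position regression.
import torch
import torch.nn as nn

N_ANT, N_SUB = 32, 64                              # antennas (tokens) and subcarriers

class AttentiveLocalizer(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.embed = nn.Linear(2 * N_SUB, d_model)               # real/imag CSI per antenna
        self.attn = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, 2)                        # (x, y) position

    def forward(self, csi):                        # csi: (batch, N_ANT, 2 * N_SUB)
        tokens = self.attn(self.embed(csi))        # attention over antenna tokens
        return self.head(tokens.mean(dim=1))       # pool tokens, regress location

model = AttentiveLocalizer()
csi = torch.randn(8, N_ANT, 2 * N_SUB)             # stand-in CSI features
print(model(csi).shape)                            # torch.Size([8, 2])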