Abstract: We study a monostatic multiple-input multiple-output sensing scenario assisted by a reconfigurable intelligent surface using tensor signal modeling. We propose a method that exploits the intrinsic multidimensional structure of the received echo signal, allowing us to recast the target sensing problem as a nested tensor-based decomposition problem to jointly estimate the delay, Doppler, and angular information of the target. We derive a two-stage approach based on the alternating least squares algorithm followed by the estimation of signal parameters via rotational invariance techniques to extract the target parameters. Simulation results show that the proposed tensor-based algorithm yields accurate estimates of the sensing parameters with low complexity.
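The abstract does not spell out the decomposition itself; as a rough, self-contained illustration of the alternating least squares (ALS) ingredient that such tensor methods build on, the sketch below fits a plain rank-R CP model to a real-valued third-order tensor with NumPy. The dimensions, rank, and variable names are hypothetical, the data is kept real-valued for simplicity (radar echoes are complex), and the nested structure and the ESPRIT stage described in the abstract are omitted.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product of A (I x R) and B (J x R) -> (I*J x R).
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    # Mode-n unfolding of a third-order tensor (columns in C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=100):
    # Fit a rank-R CP model T ~ sum_r a_r o b_r o c_r by alternating least squares.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((T.shape[0], rank))
    B = rng.standard_normal((T.shape[1], rank))
    C = rng.standard_normal((T.shape[2], rank))
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Toy example: a noiseless rank-2 tensor is recovered up to scaling/permutation.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (8, 6, 5))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))
```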
Abstract: The 5th generation (5G) of wireless systems is being deployed with the aim of providing a wide range of wireless communication services, such as low data rates for a massive number of devices, broadband, low latency, and industrial wireless access. This aim is even more challenging in next-generation wireless systems (6G), where wireless connectivity is expected to serve any connected intelligent unit, such as software robots and humans interacting in the metaverse, autonomous vehicles, drones, trains, or smart sensors monitoring cities, buildings, and the environment. Because wireless devices will be orders of magnitude denser than in 5G cellular systems, and because of their complex quality-of-service requirements, access to the wireless spectrum will have to be appropriately shared to avoid congestion, poor quality of service, or unsatisfactory communication delays. Spectrum sharing methods have been the object of intense study through model-based approaches, such as optimization and game theory. However, these methods may fail when facing the complexity of the communication environments in 5G, 6G, and beyond. Recently, there has been significant interest in the application and development of data-driven methods, namely machine learning methods, to handle the complex operation of spectrum sharing. In this survey, we provide a complete overview of the state of the art of machine learning for spectrum sharing. First, we map the most prominent methods that we encounter in spectrum sharing. Then, we show how these machine learning methods are applied to the numerous dimensions and sub-problems of spectrum sharing, such as spectrum sensing, spectrum allocation, spectrum access, and spectrum handoff. We also highlight several open questions and future trends.
Abstract: Contemporary radio access networks employ link adaptation (LA) algorithms to optimize the modulation and coding scheme for the prevailing propagation conditions and are near-optimal in terms of the achieved spectral efficiency. LA is a challenging task in the presence of mobility, fast fading, imperfect channel quality information, and limited knowledge of the receiver characteristics at the transmitter, all of which render model-based LA algorithms complex and suboptimal. Model-based LA is especially difficult as connected user equipment devices become increasingly heterogeneous in terms of receiver capabilities, antenna configurations, and hardware characteristics. Recognizing these difficulties, previous works have proposed reinforcement learning (RL) for LA, which faces deployment difficulties due to its potential negative impact on live network performance. To address this challenge, this paper considers offline RL to learn LA policies from data acquired in live networks with minimal or no intrusive effects on network operation. We propose three LA designs based on batch-constrained deep Q-learning, conservative Q-learning, and decision transformers, showing that offline RL algorithms can achieve the performance of state-of-the-art online RL methods when data is collected with a proper behavioral policy.
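The paper's agents are built on deep networks and decision transformers; purely as a toy illustration of the conservative offline update idea (not the paper's actual design), the sketch below applies a tabular conservative Q-learning-style update to a hypothetical logged link-adaptation dataset in which states are quantized channel quality indicator (CQI) levels and actions are MCS indices. All sizes, rewards, and the random dataset are placeholders.

```python
import numpy as np

# Hypothetical offline dataset of logged link-adaptation decisions:
# (cqi_state, mcs_action, reward = achieved spectral efficiency, next_cqi_state)
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 16, 8          # quantized CQI levels, MCS indices (toy sizes)
dataset = [(rng.integers(N_STATES), rng.integers(N_ACTIONS),
            rng.random(), rng.integers(N_STATES)) for _ in range(5000)]

Q = np.zeros((N_STATES, N_ACTIONS))
gamma, lr, alpha_cql = 0.9, 0.05, 1.0

for epoch in range(50):
    for s, a, r, s_next in dataset:
        # Standard temporal-difference target and error.
        td_error = r + gamma * Q[s_next].max() - Q[s, a]
        # Conservative regularizer (gradient of logsumexp_a Q(s,a) - Q(s,a_logged)):
        # softly push down all Q-values in the state, push the logged action back up.
        soft = np.exp(Q[s] - Q[s].max())
        soft /= soft.sum()
        Q[s] -= lr * alpha_cql * soft
        Q[s, a] += lr * alpha_cql
        # Bellman update on the logged state-action pair.
        Q[s, a] += lr * td_error

# Greedy offline policy: pick the MCS with the highest conservative Q-value per CQI.
policy = Q.argmax(axis=1)
print(policy)
```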
Abstract: Integrated sensing and communications (ISAC) is a promising component of 6G networks, fusing communication and radar technologies to facilitate new services. Additionally, the use of extremely large-scale antenna arrays (ELLA) at the ISAC common receiver not only facilitates terahertz-rate communication links but also significantly enhances the accuracy of target detection in radar applications. In practical scenarios, communication scatterers and radar targets often reside in close proximity to the ISAC receiver. This, combined with the use of ELLA, fundamentally alters the electromagnetic characteristics of wireless and radar channels, shifting from far-field planar-wave propagation to near-field spherical-wave propagation. Under the far-field planar-wave model, the phase of the array response vector varies linearly with the antenna index. In contrast, in the near-field spherical-wave model, this phase relationship becomes nonlinear. This shift presents a fundamental challenge: the widely used Fourier analysis can no longer be directly applied for target detection and communication channel estimation at the ISAC common receiver. In this work, we propose a feasible solution to address this fundamental issue. Specifically, we demonstrate that there exists a high-dimensional space in which the phase nonlinearity can be expressed as linear. Leveraging this insight, we develop a lifted super-resolution framework that simultaneously performs communication channel estimation and extracts target parameters with high precision.
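To make the linear-versus-nonlinear phase behavior concrete (a generic illustration only, not the paper's lifted super-resolution framework), the sketch below compares the far-field and spherical-wave near-field array responses of a uniform linear array; the carrier frequency, array size, and source location are arbitrary choices.

```python
import numpy as np

c = 3e8
fc = 28e9                      # carrier frequency (arbitrary choice)
lam = c / fc
d = lam / 2                    # half-wavelength element spacing
N = 256                        # large array, so the near-field region is large
n = np.arange(N)
x_n = (n - (N - 1) / 2) * d    # element positions along a uniform linear array

theta = np.deg2rad(30)         # source angle from broadside
r = 5.0                        # source range in meters (well inside the near field here)

# Far-field (planar-wave) model: the phase is linear in the antenna index.
phase_far = 2 * np.pi / lam * x_n * np.sin(theta)

# Near-field (spherical-wave) model: the phase follows the exact source-to-element
# distance, which is a nonlinear function of the antenna index.
dist = np.sqrt(r**2 + x_n**2 - 2 * r * x_n * np.sin(theta))
phase_near = 2 * np.pi / lam * (r - dist)

a_far = np.exp(1j * phase_far)     # far-field array response vector
a_near = np.exp(1j * phase_near)   # near-field array response vector

# The mismatch grows with the aperture: the normalized correlation drops well
# below 1, and the near-field phase deviates strongly from any linear-in-index fit.
print("correlation |a_far^H a_near| / N:", np.abs(a_far.conj() @ a_near) / N)
coeffs = np.polyfit(n, phase_near, 1)
print("max deviation from best linear phase fit (rad):",
      np.abs(phase_near - np.polyval(coeffs, n)).max())
```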
Abstract: Rs4rs is a web application designed to perform semantic search on recent papers from top conferences and journals related to Recommender Systems. Current scholarly search engine tools like Google Scholar, Semantic Scholar, and ResearchGate often yield broad results that fail to target the most relevant high-quality publications. Moreover, manually visiting individual conference and journal websites is a time-consuming process that primarily supports only syntactic searches. Rs4rs addresses these issues by providing a user-friendly platform where researchers can input their topic of interest and receive a list of recent, relevant papers from top Recommender Systems venues. Utilizing semantic search techniques, Rs4rs ensures that the search results are not only precise and relevant but also comprehensive, capturing papers regardless of variations in wording. This tool significantly enhances research efficiency and accuracy, thereby benefiting the research community and public by facilitating access to high-quality, pertinent academic resources in the field of Recommender Systems. Rs4rs is available at https://rs4rs.com.
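The abstract does not disclose Rs4rs's internal pipeline; the snippet below is only a generic sketch of embedding-based semantic search over paper texts, using the sentence-transformers library with an arbitrary model choice and a made-up three-paper corpus, to illustrate how results can be ranked by meaning rather than by keyword overlap.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical corpus of recent RecSys papers (title or title + abstract text).
papers = [
    "Contrastive learning for sequential recommendation",
    "Debiasing click-through-rate prediction with counterfactual reasoning",
    "Graph neural networks for collaborative filtering at scale",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # model choice is arbitrary
paper_emb = model.encode(papers, normalize_embeddings=True)

query = "self-supervised methods for session-based recommenders"
query_emb = model.encode([query], normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalized embeddings;
# papers are returned by decreasing semantic relevance, not keyword overlap.
scores = paper_emb @ query_emb.T
for idx in np.argsort(-scores.ravel()):
    print(f"{scores[idx, 0]:.3f}  {papers[idx]}")
```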
Abstract: Recommender systems research lacks standardized benchmarks for reproducibility and algorithm comparisons. We introduce RBoard, a novel framework addressing these challenges by providing a comprehensive platform for benchmarking diverse recommendation tasks, including CTR prediction, Top-N recommendation, and others. RBoard's primary objective is to enable fully reproducible and reusable experiments across these scenarios. The framework evaluates algorithms across multiple datasets within each task, aggregating results for a holistic performance assessment. It implements standardized evaluation protocols, ensuring consistency and comparability. To facilitate reproducibility, all user-provided code can be easily downloaded and executed, allowing researchers to reliably replicate studies and build upon previous work. By offering a unified platform for rigorous, reproducible evaluation across various recommendation scenarios, RBoard aims to accelerate progress in the field and establish a new standard for recommender systems benchmarking in both academia and industry. The platform is available at https://rboard.org and the demo video can be found at https://bit.ly/rboard-demo.
Abstract: We present H2O-Danube3, a series of small language models consisting of H2O-Danube3-4B, trained on 6T tokens, and H2O-Danube3-500M, trained on 4T tokens. Our models are pre-trained on high-quality web data consisting of primarily English tokens in three stages with different data mixes before final supervised tuning for the chat version. The models exhibit highly competitive metrics across a multitude of academic, chat, and fine-tuning benchmarks. Thanks to its compact architecture, H2O-Danube3 can be efficiently run on a modern smartphone, enabling local inference and rapid processing capabilities even on mobile devices. We make all models openly available under the Apache 2.0 license, further democratizing LLMs economically to a wider audience.
Abstract: Ensuring smooth mobility management while employing directional beamformed transmissions in 5G millimeter-wave networks calls for robust and accurate user equipment (UE) localization and tracking. In this article, we develop neural network-based positioning models with time- and frequency-domain channel state information (CSI) data in harsh non-line-of-sight (NLoS) conditions. We propose a novel frequency-domain feature extraction, which combines relative phase differences and received powers across resource blocks, and offers robust performance and reliability. Additionally, we exploit the multipath components and propose an aggregate time-domain feature combining time-of-flight, angle-of-arrival, and received path-wise powers. Importantly, the temporal correlations are also harnessed in the form of sequence-processing neural networks, which prove to be of particular benefit for vehicular UEs. Realistic numerical evaluations in a large-scale line-of-sight (LoS)-obstructed urban environment with moving vehicles are provided, building on full ray-tracing-based propagation modeling. The results show the robustness of the proposed CSI features in terms of positioning accuracy, and that the proposed models reliably localize UEs even in the absence of a LoS path, clearly outperforming the state-of-the-art with similar or even reduced processing complexity. The proposed sequence-based neural network model is capable of tracking the UE position, speed, and heading simultaneously despite the strong uncertainties in the CSI measurements. Finally, it is shown that differences between the training and online inference environments can be efficiently addressed and alleviated through transfer learning.
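The exact feature definitions are given in the paper rather than the abstract; as a plausible, purely illustrative sketch (with hypothetical antenna and resource-block counts), the function below builds a frequency-domain feature vector from a raw CSI matrix by combining per-resource-block received powers with phase differences relative to a reference antenna, the kind of input such positioning networks could consume.

```python
import numpy as np

def frequency_domain_features(H):
    """
    H: complex CSI of shape (n_antennas, n_resource_blocks).
    Returns a real-valued feature vector made of per-RB received powers and
    per-RB phase differences w.r.t. antenna 0.
    (Illustrative only; the paper's exact feature construction may differ.)
    """
    power = 10 * np.log10(np.abs(H) ** 2 + 1e-12)     # received power per RB [dB]
    rel_phase = np.angle(H * np.conj(H[0:1, :]))      # phase relative to antenna 0
    return np.concatenate([power.ravel(), rel_phase[1:].ravel()])

# Toy example with hypothetical dimensions: 4 antennas, 52 resource blocks.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 52)) + 1j * rng.standard_normal((4, 52))
features = frequency_domain_features(H)
print(features.shape)   # flat input vector for a positioning neural network
```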
Abstract: In this work, we consider the matrix completion problem, where the objective is to reconstruct a low-rank matrix from a few observed entries. A commonly employed approach involves nuclear norm minimization. For this method to succeed, the number of observed entries needs to scale at least proportionally to both the rank of the ground-truth matrix and the coherence parameter. While the only prior information is oftentimes the low-rank nature of the ground-truth matrix, in various real-world scenarios, additional knowledge about the ground-truth low-rank matrix is available. For instance, in collaborative filtering, the Netflix problem, and dynamic channel estimation in wireless communications, we have partial or full knowledge about the signal subspace in advance. Specifically, we are aware of some subspaces that form multiple angles with the column and row spaces of the ground-truth matrix. Leveraging this valuable information has the potential to significantly reduce the required number of observations. To this end, we introduce a multi-weight nuclear norm optimization problem that concurrently promotes the low-rank property as well as the information about the available subspaces. The proposed weights are tailored to penalize each angle corresponding to each basis of the prior subspace independently. We further propose an optimal weight selection strategy by minimizing the coherence parameter of the ground-truth matrix, which is equivalent to minimizing the required number of observations. Simulation results validate the advantages of incorporating multiple weights in the completion procedure. Specifically, our proposed multi-weight optimization problem demonstrates a substantial reduction in the required number of observations compared to state-of-the-art methods.
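For orientation, the snippet below sets up a basic weighted nuclear norm completion in CVXPY with a single weight per prior column/row subspace; the paper's multi-weight formulation assigns an individual weight to each basis direction (each principal angle) and derives the optimal weights, whereas the weights, subspaces, and problem sizes here are placeholders.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # ground-truth low-rank matrix
Omega = (rng.random((n, n)) < 0.3).astype(float)                 # observed-entry mask

# Hypothetical prior subspaces close to the true column/row spaces (small perturbation).
U_prior, _ = np.linalg.qr(M @ rng.standard_normal((n, r)) + 0.1 * rng.standard_normal((n, r)))
V_prior, _ = np.linalg.qr(M.T @ rng.standard_normal((n, r)) + 0.1 * rng.standard_normal((n, r)))

# Directions inside the prior subspaces are penalized less (weight < 1) than their
# orthogonal complements (weight 1); placeholder weights, not the optimized values.
w_col, w_row = 0.3, 0.3
Q_col = w_col * U_prior @ U_prior.T + (np.eye(n) - U_prior @ U_prior.T)
Q_row = w_row * V_prior @ V_prior.T + (np.eye(n) - V_prior @ V_prior.T)

X = cp.Variable((n, n))
objective = cp.Minimize(cp.norm(Q_col @ X @ Q_row, "nuc"))
constraints = [cp.multiply(Omega, X) == Omega * M]               # agree on observed entries
cp.Problem(objective, constraints).solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```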
Abstract: In this work, we address the challenge of accurately obtaining channel state information at the transmitter (CSIT) for frequency division duplexing (FDD) multiple-input multiple-output systems. Although CSIT is vital for maximizing spatial multiplexing gains, traditional CSIT estimation methods often suffer from impracticality due to the substantial training and feedback overhead they require. To address this challenge, we leverage two sources of prior information simultaneously: the presence of limited local scatterers at the base station (BS) and the time-varying characteristics of the channel. The former makes users' channels sparse in a redundant angular dictionary whose size exceeds the spatial dimension (i.e., the number of BS antennas), while the latter provides a prior non-uniform distribution in the angular domain. We propose a weighted optimization framework that simultaneously reflects both of these features. The optimal weights are then obtained by minimizing the expected recovery error of the optimization problem. This establishes an analytical closed-form relationship between the optimal weights and the angular-domain characteristics. Numerical experiments verify the effectiveness of our proposed approach in reducing the recovery error and, consequently, the training and feedback overhead.
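As a generic illustration of the weighted-optimization idea (not the paper's exact formulation or its closed-form optimal weights), the sketch below recovers an angularly sparse channel from a small number of pilot observations via weighted l1 minimization in CVXPY, assigning smaller weights to angular bins that a hypothetical prior distribution marks as likely; all dimensions, the prior, and the weight mapping are placeholders.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_ant, n_grid, n_pilots = 32, 64, 12           # BS antennas, redundant angular grid, pilots

# Redundant angular dictionary (oversampled DFT-like grid, size > number of antennas).
sin_grid = np.linspace(-1, 1, n_grid, endpoint=False)
A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), sin_grid)) / np.sqrt(n_ant)

# Sparse angular channel with a few active paths (hypothetical directions).
x_true = np.zeros(n_grid, dtype=complex)
x_true[[10, 13, 40]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
h = A @ x_true                                  # downlink spatial channel at the BS

# Compressed pilot observations (random pilot combining).
P = (rng.standard_normal((n_pilots, n_ant)) + 1j * rng.standard_normal((n_pilots, n_ant))) / np.sqrt(2)
y = P @ h

# Non-uniform angular prior -> weights: likely bins are penalized less.
prior = np.full(n_grid, 0.2)
prior[5:20] = 1.0                               # scatterers expected around these angles
prior[35:45] = 1.0
weights = 1.0 / (prior + 1e-2)                  # placeholder mapping, not the optimal rule

x = cp.Variable(n_grid, complex=True)
problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(weights, x))),
                     [P @ A @ x == y])
problem.solve()
print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```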