Recent works on joint communication and sensing (JCAS) cellular networks have proposed time-division mode (TDM) and concurrent mode (CM) as alternative methods for sharing resources between communication and sensing signals. While the performance of these JCAS schemes for object tracking and parameter estimation has been studied in previous works, their performance in target detection in the presence of clutter has not been analyzed. In this paper, we propose a detection scheme for estimating the number of targets in JCAS cellular networks that employ TDM or CM resource sharing. The proposed detection method allows for the presence of clutter and/or temporally correlated noise. The scheme is studied with respect to the JCAS trade-off parameters that control the time slots in TDM and the power resources in CM allocated to sensing and communications. The performance of two fundamental transmit beamforming schemes, typical for JCAS, is compared in terms of receiver operating characteristic (ROC) curves. Our results indicate that the TDM scheme generally yields somewhat better detection performance than the CM scheme, although both schemes outperform existing approaches provided that their respective trade-off parameters are tuned properly.
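As a hedged illustration (not the detector proposed in the paper), the sketch below computes Monte Carlo ROC curves for a simple energy detector under a toy Gaussian clutter-plus-noise model. The slot counts and SNRs are hypothetical stand-ins for a TDM-like trade-off (fewer sensing slots at full power) versus a CM-like one (all slots, but sensing receives only part of the power budget).

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_stats(n_slots, snr_db, n_trials=20000):
    """Energy-detector statistics over n_slots sensing samples (toy model)."""
    snr = 10 ** (snr_db / 10)
    noise = rng.standard_normal((n_trials, n_slots))
    h0 = np.sum(noise ** 2, axis=1)                   # clutter/noise only
    h1 = np.sum((noise + np.sqrt(snr)) ** 2, axis=1)  # target present
    return h0, h1

def roc(h0, h1, n_points=200):
    """Empirical ROC obtained by sweeping the detection threshold."""
    thresholds = np.quantile(np.concatenate([h0, h1]),
                             np.linspace(0, 1, n_points))
    pfa = np.array([(h0 > t).mean() for t in thresholds])
    pd = np.array([(h1 > t).mean() for t in thresholds])
    return pfa, pd

# Hypothetical trade-off points: TDM-like (32 slots, 0 dB sensing SNR)
# vs CM-like (64 slots, sensing power reduced to -6 dB).
pfa_tdm, pd_tdm = roc(*detection_stats(n_slots=32, snr_db=0))
pfa_cm, pd_cm = roc(*detection_stats(n_slots=64, snr_db=-6))
```

Comparing `pd` at a fixed `pfa` (e.g., 0.1) for the two parameterizations mimics the kind of ROC-based comparison the paper performs.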
We present H2O-Danube-1.8B, a 1.8B language model trained on 1T tokens following the core principles of Llama 2 and Mistral. We leverage and refine various techniques for pre-training large language models. Although our model is trained on significantly fewer total tokens than reference models of similar size, it exhibits highly competitive metrics across a multitude of benchmarks. We additionally release a chat model trained with supervised fine-tuning followed by direct preference optimization. We make H2O-Danube-1.8B openly available under the Apache 2.0 license, further democratizing LLMs economically to a wider audience.
Several previous works have addressed the inherent trade-off between allocating resources in the power and time domains to pilot and data signals in multiple-input multiple-output systems over block-fading channels. In particular, when the channel changes rapidly in time, channel aging degrades the spectral efficiency unless pilot spacing and power control are set properly. Although non-stationary stochastic processes are recognized as more accurate models for time-varying wireless channels, the problem of pilot spacing and power control in multi-antenna systems operating over non-stationary channels has not been addressed in the literature. In this paper, we address this gap by introducing a refined first-order autoregressive model that exploits the inherent temporal correlations of non-stationary Rician aging channels. We design a multi-frame structure for data transmission that reflects the non-stationary fading environment better than previously developed single-frame structures. Subsequently, to determine the optimal pilot spacing and power control within this multi-frame structure, we develop an optimization framework and an efficient algorithm based on maximizing a deterministic equivalent expression for the spectral efficiency, and we demonstrate its generality by showing that it encompasses previous channel aging results. Our numerical results indicate the efficacy of the proposed method in terms of spectral efficiency gains over the single-frame structure.
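For intuition, the stationary core of such a model can be sketched as below. This is the standard first-order autoregressive Rician aging recursion, with the paper's refinements for non-stationarity (e.g., frame-dependent statistics) deliberately left out; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rician_aging(n_frames, alpha, k_factor_db):
    """First-order AR sketch of a Rician aging channel (illustrative only).

    h[n] = h_bar + g[n], where the diffuse part evolves as
    g[n+1] = alpha * g[n] + sqrt(1 - alpha**2) * w[n],  w[n] ~ CN(0, sigma_g^2),
    so per-sample power stays constant while samples decorrelate over time.
    """
    k = 10 ** (k_factor_db / 10)
    h_bar = np.sqrt(k / (k + 1))       # deterministic (line-of-sight) part
    sigma_g = np.sqrt(1 / (k + 1))     # diffuse-part standard deviation
    g = sigma_g / np.sqrt(2) * (rng.standard_normal() + 1j * rng.standard_normal())
    h = np.empty(n_frames, dtype=complex)
    for n in range(n_frames):
        h[n] = h_bar + g
        w = sigma_g / np.sqrt(2) * (rng.standard_normal() + 1j * rng.standard_normal())
        g = alpha * g + np.sqrt(1 - alpha ** 2) * w
    return h

# alpha close to 1 models slow aging; the K-factor sets the LoS/diffuse split.
h = simulate_rician_aging(n_frames=10000, alpha=0.98, k_factor_db=3)
```

The total channel power stays at one (LoS power plus diffuse power), which is the normalization commonly assumed in aging analyses.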
The performance of modern wireless communication systems depends critically on the quality of the channel state information (CSI) available at the transmitter and receiver. Several previous works have proposed concepts and algorithms that help maintain high-quality CSI even in the presence of high mobility and channel aging, such as temporal prediction schemes that employ neural networks. However, it is still unclear which neural network-based scheme provides the best performance in terms of prediction quality, training complexity, and practical feasibility. To investigate this question, this paper first provides an overview of state-of-the-art neural networks applicable to channel prediction and compares their performance in terms of prediction quality. Next, a new comparative analysis is proposed for four promising neural networks with different prediction horizons. The well-known tapped-delay channel model recommended by the Third Generation Partnership Project (3GPP) is used for a standardized comparison among the neural networks. Based on this comparative evaluation, the advantages and disadvantages of each neural network are discussed, and guidelines for selecting the best-suited neural network for channel prediction applications are given.
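For intuition on what such predictors do, the following minimal baseline (not one of the surveyed neural networks) fits a tapped-delay linear predictor by least squares on a synthetic sum-of-sinusoids fading process; the tap order, prediction horizon, and Doppler frequencies are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic narrowband fading channel: sum of complex sinusoids (Jakes-like).
n = 2000
t = np.arange(n)
doppler = np.array([0.010, 0.023, 0.037])     # normalized Doppler frequencies
h = sum(np.exp(1j * (2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)))
        for f in doppler) / np.sqrt(len(doppler))

order, horizon = 8, 3      # number of past taps used / symbols predicted ahead

# Regression: predict h[i + horizon] from the window h[i-order+1 .. i].
X = np.array([h[i - order + 1: i + 1] for i in range(order - 1, n - horizon)])
y = h[order - 1 + horizon: n]

w = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares predictor taps
nmse = np.mean(np.abs(X @ w - y) ** 2) / np.mean(np.abs(y) ** 2)
```

On this noiseless synthetic process the linear predictor is essentially exact; neural predictors are motivated by the noisy, nonlinear, non-stationary behavior of measured channels, where such a baseline degrades.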
We consider the problem of gridless blind deconvolution and demixing (GB2D) in scenarios where multiple users communicate messages through multiple unknown channels, and a single base station (BS) collects their contributions. This scenario arises in various communication fields, including wireless communications, the Internet of Things, over-the-air computation, and integrated sensing and communications. In this setup, each user's message is convolved with a multi-path channel formed by several scaled and delayed copies of Dirac spikes. The BS receives a linear combination of the convolved signals, and the goal is to recover the unknown amplitudes, continuously indexed delays, and transmitted waveforms from a compressed vector of measurements at the BS. However, in the absence of any prior knowledge of the transmitted messages and channels, GB2D is highly challenging and intractable in general. To address this issue, we assume that each user's message follows a distinct modulation scheme living in a known low-dimensional subspace. By exploiting these subspace assumptions and the sparsity of the multipath channels for different users, we transform the nonlinear GB2D problem into a matrix tuple recovery problem from a few linear measurements. To this end, we propose a semidefinite programming formulation that exploits the specific low-dimensional structure of the matrix tuple to recover the messages and continuous delays of the different communication paths from a single received signal at the BS. Finally, our numerical experiments show that our proposed method effectively recovers all transmitted messages and the continuous delay parameters of the channels with a sufficient number of samples.
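As a much-simplified illustration of off-grid delay recovery (the paper's semidefinite program handles many users and paths jointly, which this sketch does not attempt), a single path's continuous delay appears as a linear phase ramp across frequency-domain samples and can be estimated from bin-to-bin phase increments:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 64                              # number of frequency-domain samples
tau_true = 5.317                    # continuous delay in samples (off the grid)
gain = 1.7 * np.exp(1j * 0.4)       # unknown complex path gain
k = np.arange(n)

# Frequency-domain response of one delayed path, plus small noise.
y = gain * np.exp(-2j * np.pi * k / n * tau_true)
y += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# The delay shows up as a constant phase ramp across bins; averaging the
# bin-to-bin phase increments yields a gridless (continuous) estimate.
increments = y[1:] * np.conj(y[:-1])
tau_hat = -np.angle(increments.sum()) * n / (2 * np.pi)
```

This phase-slope trick only resolves one path and delays below half the unambiguous range; resolving superimposed paths from multiple users is precisely what motivates the structured semidefinite relaxation in the paper.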
Deforestation estimation and fire detection in the Amazon forest pose a significant challenge due to the vast size of the area and its limited accessibility. These are nevertheless crucial problems, as deforestation and wildfires lead to severe environmental consequences, including climate change, global warming, and biodiversity loss. To address these problems effectively, multimodal satellite imagery and remote sensing offer a promising solution for estimating deforestation and detecting wildfires in the Amazonia region. This paper introduces a new curated dataset and a deep learning-based approach to solve these problems using convolutional neural networks (CNNs) and comprehensive data processing techniques. Our dataset includes curated images and diverse channel bands from the Sentinel, Landsat, VIIRS, and MODIS satellites, and is designed to meet different spatial and temporal resolution requirements. Our method achieves high-precision deforestation estimation and burned-area detection on unseen images from the region. Our code, models, and dataset are open source: https://github.com/h2oai/cvpr-multiearth-deforestation-segmentation
This paper proposes a tensor-based parametric modeling and estimation framework for multiple-input multiple-output (MIMO) systems assisted by intelligent reflecting surfaces (IRSs). We present two algorithms that exploit the tensor structure of the received pilot signal to estimate the concatenated channel. The first is an iterative solution based on the alternating least squares (ALS) algorithm. In contrast, the second method provides closed-form estimates of the involved parameters using the higher-order singular value decomposition (HOSVD). Our numerical results show that the proposed tensor-based methods outperform competing state-of-the-art channel estimation schemes, thanks to the exploitation of the algebraic tensor structure of the combined channel, without additional computational complexity.
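A minimal sketch of the closed-form idea, assuming a rank-one three-way tensor as a stand-in for the structured received signal (the paper's actual signal model is richer): the dominant left singular vector of each mode unfolding recovers the corresponding factor up to a complex scaling.

```python
import numpy as np

rng = np.random.default_rng(4)

def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

# Rank-one 3-way tensor: outer product of three complex factor vectors.
a = rng.standard_normal(4) + 1j * rng.standard_normal(4)
b = rng.standard_normal(5) + 1j * rng.standard_normal(5)
c = rng.standard_normal(6) + 1j * rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a, b, c)

# HOSVD-style closed-form estimates: the dominant left singular vector of
# each unfolding equals the corresponding factor up to complex scaling.
est = [np.linalg.svd(unfold(T, m))[0][:, 0] for m in range(3)]

def aligned_error(u, v):
    """Relative error after removing the inherent scaling ambiguity."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.linalg.norm(u - np.vdot(v, u) * v)
```

The scaling ambiguity removed by `aligned_error` is inherent to any tensor factorization of this kind and is typically resolved in practice by pilot normalization conventions.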
Resource allocation and multiple access schemes are instrumental to the success of communication networks, which must provide seamless wireless connectivity to a growing population of uncoordinated and non-synchronized users. In this paper, we present a novel random access scheme that addresses one of the most severe barriers current strategies face in achieving massive connectivity and ultra-reliable, low-latency communications for 6G. The proposed scheme exploits the continuous angular group sparsity of wireless channels to provide low latency, high reliability, and massive access in the face of limited time-bandwidth resources, asynchronous transmissions, and preamble errors. Specifically, a reconstruction-free goal-oriented optimization problem is proposed that preserves the angular information of active devices; it is then complemented by a clustering algorithm that assigns active users to specific groups. This allows us to identify active stationary devices according to their line-of-sight angles. Additionally, for mobile devices, an alternating minimization algorithm is proposed to recover their preambles, data, and channel gains simultaneously, enabling the identification of active mobile users. Simulation results show that the proposed algorithm provides excellent performance and supports a massive number of devices. Moreover, the performance of the proposed scheme is independent of the total number of devices, distinguishing it from other random access schemes. The proposed method thus provides a unified solution that meets the requirements of machine-type communications and ultra-reliable, low-latency communications, making it an important contribution to emerging 6G networks.
In this letter, we consider an intelligent reflecting surface (IRS)-assisted multiple-input multiple-output (MIMO) communication system and optimize the joint active and passive beamforming by exploiting the geometrical structure of the propagation channels. Owing to the inherent Kronecker product structure of the channel matrix, the global beamforming optimization problem is split into lower-dimensional horizontal and vertical sub-problems. Based on this factorization property, we propose two closed-form methods for the passive and active beamforming designs at the IRS, the base station, and the user equipment. The first solution is a singular value decomposition (SVD)-based algorithm applied independently to the factorized channels, while the second method resorts to a third-order rank-one tensor approximation along each domain. Simulation results show that exploiting the Kronecker structure of the channel yields a significant reduction in computational complexity at the expense of a negligible spectral efficiency (SE) loss. We also show that, under imperfect channel estimation, the tensor-based solution achieves better SE than the benchmark and the proposed SVD-based solutions.
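The SVD-based building block can be sketched for a single generic MIMO link as follows (the letter applies it independently to the factorized horizontal and vertical channels); antenna dimensions, stream count, and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

nr, nt, ns = 4, 4, 2       # receive antennas, transmit antennas, data streams
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# SVD-based eigen-beamforming: precode along the top right singular vectors
# and combine with the top left singular vectors of the channel.
U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T[:, :ns]    # transmit precoder (columns are beams)
W = U[:, :ns]              # receive combiner

H_eff = W.conj().T @ H @ F          # effective channel after beamforming
snr = 10.0                          # per-stream SNR, unit noise power assumed
se = np.sum(np.log2(1 + snr * s[:ns] ** 2))   # spectral efficiency of the streams
```

The effective channel `H_eff` is exactly diagonal with the top singular values on its diagonal, which is what makes the per-stream SE expression separable; in the Kronecker-factorized setting, the same diagonalization is performed per domain at reduced cost.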
In-band full-duplex (FD) enables nodes to transmit and receive simultaneously on the same frequency band, challenging the traditional half-duplex assumption in wireless network design. The full-duplex capability enhances spectral efficiency and decreases latency, two key drivers pushing the performance expectations of next-generation mobile networks. In less than ten years, in-band FD has advanced from research-lab demonstrations to implementation in standards, presenting new opportunities to utilize its foundational concepts. Some of the most significant opportunities include using FD to enable wireless networks to sense the physical environment, integrate sensing and communication applications, develop integrated access and backhaul solutions, and work with smart signal propagation environments powered by reconfigurable intelligent surfaces. However, these new opportunities also come with new challenges for the large-scale commercial deployment of FD technology, such as managing self-interference, combating cross-link interference in multi-cell networks, and ensuring the coexistence of dynamic time-division duplex, subband FD, and FD networks.
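To make the self-interference challenge concrete, the following toy sketch (not a deployed design, which would combine analog and digital cancellation stages) estimates the self-interference channel by least squares from the known transmit signal and subtracts the reconstructed interference digitally; the tap count, channel decay, and noise level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)

n, taps = 4096, 8
# Known transmit signal of the FD node itself.
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Hypothetical multipath self-interference (SI) channel with decaying taps.
h_si = (rng.standard_normal(taps) + 1j * rng.standard_normal(taps)) \
       * np.exp(-0.5 * np.arange(taps))
y = np.convolve(x, h_si)[:n]                   # received SI component
# Residual remote signal and noise floor, modeled here as small noise.
y += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Digital cancellation: least-squares SI channel estimate from the known
# transmit signal, then subtraction of the reconstructed SI.
X = np.column_stack([np.r_[np.zeros(d, complex), x[:n - d]] for d in range(taps)])
h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
residual = y - X @ h_hat
cancellation_db = 10 * np.log10(np.mean(np.abs(y) ** 2)
                                / np.mean(np.abs(residual) ** 2))
```

The achievable digital cancellation here is limited by the simulated noise floor; in practice, transmitter nonlinearities and limited dynamic range cap the digital stage, which is why analog cancellation is required in addition.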