Simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) has emerged as a promising technology to realize full-space coverage and boost spectral efficiency in next-generation wireless networks. Yet, the joint design of the base station (BS) precoding matrix and the STAR-RIS transmission and reflection coefficient matrices leads to a high-dimensional, strongly nonconvex, NP-hard optimization problem. Conventional alternating optimization (AO) schemes typically involve repeated large-scale matrix inversions, resulting in high computational complexity and poor scalability, while existing deep learning approaches often rely on expensive pre-training and large network models. In this paper, we develop a gradient-based meta-learning (GML) framework that feeds optimization gradients directly into lightweight neural networks, thereby removing the need for pre-training and enabling fast adaptation. Specifically, we design dedicated GML-based schemes for both independent-phase and coupled-phase STAR-RIS models, handling their respective amplitude and phase constraints while achieving weighted sum-rate performance very close to that of AO-based benchmarks. Extensive simulations demonstrate that, for both phase models, the proposed methods substantially reduce computational overhead: complexity grows nearly linearly in the number of BS antennas and STAR-RIS elements, and the runtime speedup over AO reaches up to 10x, confirming the scalability and practicality of the proposed GML method for large-scale STAR-RIS-assisted communications.
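
As a toy illustration of the gradient-fed idea (not the paper's exact model), the sketch below trains a lightweight network whose input is the optimization gradient of a single-user rate objective with respect to the STAR-RIS phases; the channel draws, network size, and rate surrogate are all illustrative assumptions.

```python
# Toy sketch of gradient-fed meta-learning (GML) over phase shifts.
# Assumptions (not from the paper): single-user MISO rate objective,
# reflection-only unit-modulus coefficients, tiny MLP update network.
import torch

torch.manual_seed(0)
N, M = 16, 8                                   # surface elements, BS antennas
G = torch.randn(N, M, dtype=torch.cfloat)      # BS -> surface channel
h = torch.randn(N, dtype=torch.cfloat)         # surface -> user channel
w = torch.randn(M, dtype=torch.cfloat)
w = w / w.norm()                               # fixed toy precoder

def rate(theta):
    phi = torch.exp(1j * theta)                # unit-modulus coefficients
    heff = (h * phi) @ G @ w                   # effective scalar channel
    return torch.log2(1 + heff.abs() ** 2)

# Lightweight update network: maps the current gradient to a phase step,
# so the optimizer itself is learned without any pre-training phase.
net = torch.nn.Sequential(torch.nn.Linear(N, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, N))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

theta = torch.zeros(N)
for _ in range(200):
    theta = theta.detach().requires_grad_(True)
    g, = torch.autograd.grad(rate(theta), theta, create_graph=True)
    theta = theta + net(g)                     # gradient in, phase update out
    loss = -rate(theta)                        # maximize the rate
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final rate: {rate(theta).item():.3f} bit/s/Hz")
```
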
Fully digital massive MIMO systems with very large antenna counts (1000+) offer dramatic capacity gains from spatial multiplexing and beamforming. Designing digital receivers that scale to these array dimensions presents significant challenges in both channel estimation overhead and digital computation. This paper presents a computationally efficient, low-overhead receiver design based on long-term beamforming. The method combines a low-rank projection derived from the spatial covariance estimate with a fast polynomial matrix inverse. Ray tracing simulations show minimal loss relative to full instantaneous beamforming while offering significant savings in overhead and computation.
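
A sketch of the two ingredients under toy assumptions: the low-rank projection is taken from the dominant eigenvectors of a sample spatial covariance, and the reduced-dimension inverse is approximated by a Neumann-series polynomial (one common choice of polynomial inverse; the paper's exact polynomial may differ).

```python
# Long-term beamforming sketch: low-rank projection from the spatial
# covariance, then a polynomial (Neumann-series) matrix inverse.
import numpy as np

rng = np.random.default_rng(0)
M, r, T = 64, 4, 200                    # antennas, effective rank, snapshots
H = rng.normal(size=(M, r)) @ rng.normal(size=(r, T)) / np.sqrt(r)
R = H @ H.T / T                         # sample spatial covariance (real toy)

# Low-rank projection: dominant eigenvectors of the long-term covariance.
eigval, eigvec = np.linalg.eigh(R)
U = eigvec[:, -r:]                      # M x r basis of the signal subspace

# Neumann series for inv(A): sum_k (I - A/c)^k / c, which converges when
# the eigenvalues of A/c lie in (0, 2); c = trace(A) guarantees this for
# a positive definite A, trading convergence speed for simplicity.
A = U.T @ (R + 0.1 * np.eye(M)) @ U     # reduced r x r matrix to invert
c = np.trace(A)
X = np.eye(r) / c
term = np.eye(r)
P = np.eye(r) - A / c
for _ in range(40):                     # a few matmuls replace a full inverse
    term = term @ P
    X = X + term / c
print("approx-inverse error:", np.linalg.norm(X @ A - np.eye(r)))
```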




Accurate Angle-of-Arrival (AoA) estimation is essential for next-generation wireless communication systems to enable reliable beamforming, high-precision localization, and integrated sensing. Unfortunately, classical high-resolution techniques require multi-element arrays and extensive snapshot collection, while generic Machine Learning (ML) approaches often yield black-box models that lack physical interpretability. To address these limitations, we propose the Symbolic Regression-based Angle of Arrival and Beam Pattern Estimator (SABER), a constrained Symbolic Regression (SR) framework that automatically discovers closed-form, interpretable beam pattern and AoA models from path loss measurements. SABER achieves high accuracy while bridging the gap between opaque ML methods and interpretable physics-driven estimators. First, we validate our approach in a controlled free-space anechoic chamber, showing that both direct inversion of the known $\cos^n$ beam and a low-order polynomial surrogate achieve sub-0.5 degree Mean Absolute Error (MAE). A purely unconstrained SR method can further reduce the error of the predicted angles, but produces complex formulas that lack physical insight. Then, we implement the same SR-learned inversions in a real-world, Reconfigurable Intelligent Surface (RIS)-aided indoor testbed, where SABER and the unconstrained SR models recover the true AoA with near-zero error. Finally, we benchmark SABER against the Cram\'er-Rao Lower Bounds (CRLBs). Our results demonstrate that SABER is an interpretable and accurate alternative to state-of-the-art black-box ML-based methods for AoA estimation.
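
A minimal sketch of the direct $\cos^n$ inversion in free space; here the exponent $n$ and boresight gain are assumed known, whereas SABER discovers the closed form from the measurements themselves.

```python
# Direct inversion of an idealized cos^n beam pattern from power readings.
# Toy values: n, boresight gain P0, and the noise level are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, P0 = 8.0, 1.0                          # beam exponent and boresight gain
true_deg = rng.uniform(-30, 30, size=500)
power = P0 * np.cos(np.deg2rad(true_deg)) ** n
power *= 1 + 0.005 * rng.normal(size=power.shape)   # 0.5% measurement noise

# Closed-form inversion: theta = arccos((P / P0)^(1/n)); the even beam
# pattern makes the sign ambiguous, so we compare against |theta_true|.
est = np.degrees(np.arccos(np.clip(power / P0, 0.0, 1.0) ** (1.0 / n)))
mae = np.mean(np.abs(est - np.abs(true_deg)))
print(f"MAE: {mae:.3f} degrees")
```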




Precoding design based on weighted sum-rate (WSR) maximization is a fundamental problem in downlink multi-user multiple-input multiple-output (MU-MIMO) systems. While the weighted minimum mean-square error (WMMSE) algorithm is a standard solution, its high computational complexity--cubic in the number of base station antennas due to matrix inversions--hinders its application in latency-sensitive scenarios. To address this limitation, we propose a highly parallel algorithm based on a block coordinate descent framework. Our key innovation lies in updating the precoding matrix via block coordinate gradient descent, which avoids matrix inversions and relies solely on matrix multiplications, making it exceptionally amenable to GPU acceleration. We prove that the proposed algorithm converges to a stationary point of the WSR maximization problem. Furthermore, we introduce a two-stage warm-start strategy grounded in the sum mean-square error (MSE) minimization problem to accelerate convergence. We refer to our method as the Accelerated Mixed weighted-unweighted sum-MSE minimization (A-MMMSE) algorithm. Simulation results demonstrate that A-MMMSE matches the WSR performance of both conventional WMMSE and its enhanced variant, reduced-WMMSE, while achieving a substantial reduction in computational time across diverse system configurations.
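
A toy sketch of the inversion-free idea: projected gradient ascent on the WSR over the precoder, built from matrix products only. Autograd stands in for the paper's closed-form block-coordinate gradient, and the MU-MISO dimensions, step size, and power budget are illustrative assumptions.

```python
# Inversion-free WSR precoding sketch: gradient ascent plus a power-ball
# projection; every operation is a matrix product (no matrix inverses).
import torch

torch.manual_seed(0)
M, K, Pmax, sigma2 = 8, 4, 1.0, 0.1     # BS antennas, users, power, noise
H = torch.randn(K, M, dtype=torch.cfloat)
alpha = torch.ones(K)                    # user rate weights

def wsr(V):
    S = (H @ V).abs() ** 2               # S[k, j]: power at user k, stream j
    sig = S.diagonal()
    intf = S.sum(dim=1) - sig
    return (alpha * torch.log2(1 + sig / (sigma2 + intf))).sum()

V = torch.randn(M, K, dtype=torch.cfloat)
V = V * (Pmax ** 0.5 / V.norm())
for _ in range(300):
    V = V.detach().requires_grad_(True)
    g, = torch.autograd.grad(wsr(V), V)  # conjugate Wirtinger gradient
    V = V + 0.05 * g                     # ascent step
    if V.norm() ** 2 > Pmax:             # project back onto the power ball
        V = V * (Pmax ** 0.5 / V.norm())
print(f"WSR: {wsr(V).item():.3f} bit/s/Hz")
```
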
We study how far a diffusion process on a graph can drift from a designed starting pattern when that pattern is produced using Laplacian regularisation. Under standard stability conditions for undirected, entrywise nonnegative graphs, we give a closed-form, instance-specific upper bound on the steady-state spread, measured as the relative change between the final and initial profiles. The bound separates two effects: (i) an irreducible term determined by the graph's maximum node degree, and (ii) a design-controlled term that shrinks as the regularisation strength increases (following an inverse square-root law). This leads to a simple design rule: given any target limit on spread, one can choose a sufficient regularisation strength in closed form. Although one motivating application is array beamforming, where the initial pattern is the squared magnitude of the beamformer weights, the result applies to any scenario that first enforces Laplacian smoothness and then evolves by linear diffusion on a graph. Overall, the guarantee is non-asymptotic, easy to compute, and certifies how much steady-state deviation can occur.
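
A small empirical check of the setting (the closed-form bound itself is not reproduced here): the initial profile is the Laplacian-regularised solution $(I + \mu L)^{-1} p$, the diffusion limit on a connected graph is the consensus profile, and the measured relative spread shrinks as the regularisation strength grows.

```python
# Steady-state spread of linear diffusion from a Laplacian-regularised
# initial profile; the random graph is assumed connected.
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T            # undirected, entrywise nonnegative
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
p = rng.random(n)                         # raw designed pattern

for mu in (1.0, 4.0, 16.0, 64.0):
    x0 = np.linalg.solve(np.eye(n) + mu * L, p)   # regularised initial profile
    x_inf = np.full(n, x0.mean())         # diffusion limit (consensus)
    spread = np.linalg.norm(x_inf - x0) / np.linalg.norm(x0)
    print(f"mu = {mu:5.1f}   relative spread = {spread:.4f}")
```
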
In this paper, we investigate downlink co-frequency interference (CFI) mitigation in coexisting non-geostationary satellite orbit (NGSO) systems. Traditional mitigation techniques, such as zero-forcing (ZF), place nulls towards the directions of arrival (DOAs) of the interfering signals, but they suffer from high computational complexity due to matrix inversions and the required knowledge of channel state information (CSI). Furthermore, adaptive beamformers, such as sample matrix inversion (SMI)-based minimum variance beamformers, perform poorly when the available snapshots are limited. We propose a Mamba-based beamformer (MambaBF) that leverages an unsupervised deep learning (DL) approach and can be deployed on the user terminal (UT) antenna array to assist downlink beamforming and CFI mitigation, using only a limited number of available array snapshots as input and without CSI knowledge. Simulation results demonstrate that MambaBF consistently outperforms conventional beamforming techniques in mitigating interference and maximizing the signal-to-interference-plus-noise ratio (SINR), particularly under challenging conditions characterized by low SINR, limited snapshots, and imperfect CSI.
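
For context, a minimal sketch of the conventional SMI-based minimum-variance (MVDR) baseline referenced above, not of MambaBF itself; the uniform linear array, the DOAs, and the diagonal loading are illustrative assumptions.

```python
# SMI-based MVDR beamforming from a limited number of snapshots.
import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 10                              # array elements, snapshots
def steer(theta_deg):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(theta_deg)))

a_sig, a_int = steer(0.0), steer(30.0)    # desired and interfering DOAs
s = (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
i = 3 * (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
n = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = np.outer(a_sig, s) + np.outer(a_int, i) + n   # received snapshots

R = X @ X.conj().T / T + 1e-2 * np.eye(M)         # diagonally loaded SMI
w = np.linalg.solve(R, a_sig)
w = w / (a_sig.conj() @ w)                        # distortionless constraint
print("gain towards interferer (dB):", 20 * np.log10(abs(w.conj() @ a_int)))
```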



This paper investigates the use of beyond diagonal reconfigurable intelligent surface (BD-RIS) with $N$ elements to advance integrated sensing and communication (ISAC). We address a key gap in the statistical characterization of the radar signal-to-noise ratio (SNR) and the communication signal-to-interference-plus-noise ratio (SINR) by deriving tractable closed-form cumulative distribution functions (CDFs) for these metrics. Our approach maximizes the radar SNR by jointly configuring the radar beamforming and the BD-RIS phase shifts. Subsequently, zero-forcing is adopted to mitigate user interference and enhance the communication SINR. To meet ISAC outage requirements, we propose an analytically driven successive non-inversion sampling (SNIS) algorithm for estimating the network parameters that satisfy the outage constraints. Numerical results illustrate the accuracy of the derived CDFs and demonstrate the effectiveness of the proposed SNIS algorithm.
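
A minimal sketch of the zero-forcing step alone (the BD-RIS configuration and the SNIS sampler are not shown); the effective channel here is a toy i.i.d. draw rather than a RIS-aided one.

```python
# Zero-forcing over an effective multi-user channel: the pseudo-inverse
# nulls inter-user interference exactly when K <= M.
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4                               # BS antennas, users
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # ZF precoder
W = W / np.linalg.norm(W)                         # unit total transmit power
E = H @ W
print("residual interference:", np.linalg.norm(E - np.diag(np.diag(E))))
```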




Traditional ultrasound simulators solve the wave equation to model pressure distribution fields, achieving high accuracy but requiring significant computational time and resources. To address this, ray tracing approaches have been introduced, modeling wave propagation as rays interacting with boundaries and scatterers. However, existing models simplify ray propagation, generating echoes at interaction points without considering return paths to the sensor. This can result in unrealistic artifacts and necessitates careful scene tuning for plausible results. We propose a novel ultrasound simulation pipeline that utilizes a ray tracing algorithm to generate echo data, tracing each ray from the transducer through the scene and back to the sensor. To replicate advanced ultrasound imaging, we introduce a ray emission scheme optimized for plane wave imaging, incorporating delay and steering capabilities. Furthermore, we integrate a standard signal processing pipeline to simulate end-to-end ultrasound image formation. We showcase the efficacy of the proposed pipeline by modeling synthetic scenes featuring highly reflective objects, such as bones. In doing so, our proposed approach, UltraRay, not only enhances the overall visual quality but also improves the realism of the simulated images by accurately capturing secondary reflections and reducing unnatural artifacts. By building on top of a differentiable framework, the proposed pipeline lays the groundwork for a fast and differentiable ultrasound simulation tool necessary for gradient-based optimization, enabling advanced ultrasound beamforming strategies, neural network integration, and accurate inverse scene reconstruction.
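
A toy sketch of the round-trip principle: each ray is traced from the transducer to a boundary and back, and its delayed, attenuated echo is summed into an RF line. A single flat reflector, straight rays, and the pulse shape are all simplifying assumptions; steering, scatterers, and the full signal processing chain are omitted.

```python
# Round-trip ray echoes into an RF line: delay from time of flight,
# amplitude from reflectivity and spherical spreading loss.
import numpy as np

c, fs, f0 = 1540.0, 40e6, 5e6             # sound speed (m/s), sample rate, pulse freq
depth, refl = 0.03, 0.8                   # flat reflector at 3 cm
rf = np.zeros(4096)

for angle in np.linspace(-0.05, 0.05, 11):       # a small fan of rays
    d = np.array([np.sin(angle), np.cos(angle)]) # unit ray direction
    hit_dist = depth / d[1]                      # ray/plane intersection
    t = 2 * hit_dist / c                         # round-trip time of flight
    idx = int(t * fs)
    k = np.arange(64)                            # short Gaussian-windowed pulse
    pulse = np.exp(-((k - 32) / 10) ** 2) * np.cos(2 * np.pi * f0 * k / fs)
    rf[idx:idx + 64] += refl / hit_dist ** 2 * pulse
print("first echo at sample:", int(np.argmax(np.abs(rf) > 1e-9)))
```
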
Joint Communication and Sensing (JCAS) technology facilitates the seamless integration of communication and sensing functionalities within a unified framework, enhancing spectral efficiency, reducing hardware complexity, and enabling simultaneous data transmission and environmental perception. This paper explores the potential of holographic JCAS systems by leveraging reconfigurable holographic surfaces (RHS) to achieve high-resolution hybrid holographic beamforming while simultaneously sensing the environment. As the holographic transceivers are governed by arbitrary antenna spacing, we first derive exact Cram\'er-Rao Bounds (CRBs) for azimuth and elevation angles to rigorously characterize the three-dimensional (3D) sensing accuracy. To optimize the system performance, we propose a novel weighted multi-objective problem formulation that aims to simultaneously maximize the communication rate and minimize the CRBs. However, this formulation is highly non-convex due to the inverse dependence of the CRB on the optimization variables, making the solution extremely challenging. To address this, we propose a novel algorithmic framework based on the Majorization-Maximization (MM) principle, employing alternating optimization to efficiently solve the problem. The proposed method relies on closed-form surrogate functions, derived herein, that majorize the original objective, enabling tractable optimization. Simulation results are presented to validate the effectiveness of the proposed framework under diverse system configurations, demonstrating its potential for next-generation holographic JCAS systems.
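
A generic sketch of the MM principle the framework builds on, shown in its minimization form on a robust line-fitting problem rather than the JCAS objective: the concave square-root term is majorized by its tangent, so each iteration minimizes a quadratic surrogate in closed form.

```python
# Majorization-Minimization demo: sqrt(u) <= sqrt(u_t) + (u - u_t)/(2 sqrt(u_t))
# turns a robust (ell_1-like) fit into a sequence of weighted least squares.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100)
y = 2.0 * a + rng.standard_t(df=1.5, size=100)   # heavy-tailed noise
eps = 1e-6

x = 0.0                                          # slope estimate
for _ in range(50):
    r2 = (y - a * x) ** 2 + eps
    w = 1.0 / (2.0 * np.sqrt(r2))                # curvature of the majorizer
    x = np.sum(w * a * y) / np.sum(w * a ** 2)   # closed-form surrogate minimum
print(f"slope: {x:.3f}  objective: {np.sum(np.sqrt((y - a*x)**2 + eps)):.3f}")
```
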
This article presents a Non-negative Tensor Factorization based method for sound source separation from Ambisonic microphone signals. The proposed method enables the use of prior knowledge about the Directions-of-Arrival (DOAs) of the sources, incorporated through a constraint on the Spatial Covariance Matrix (SCM) within a Maximum a Posteriori (MAP) framework. Specifically, this article presents a detailed derivation of four algorithms based on two cost functions, namely the squared Euclidean distance and the Itakura-Saito divergence, each combined with one of two prior probability distributions on the SCM, the Wishart and the Inverse Wishart. The experimental evaluation of the baseline Maximum Likelihood (ML) and the proposed MAP methods is primarily based on first-order Ambisonic recordings, using four source signal datasets, three with musical pieces and one containing speech utterances. We consider under-determined, determined, and over-determined scenarios by separating two, four, and six sound sources, respectively. Furthermore, we evaluate the proposed algorithms for different spherical harmonic orders, at different reverberation time levels, and under non-ideal prior knowledge with increasingly corrupted DOAs. Overall, compared with beamforming, a state-of-the-art separation technique, and the baseline ML methods, the proposed MAP approach offers superior separation performance in a variety of scenarios, as measured by standard objective separation metrics such as the SDR, ISR, SIR, and SAR.
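
As background, a sketch of multiplicative updates for plain NMF under the Itakura-Saito divergence, one of the two cost functions above; the tensor structure, spatial covariance modelling, and MAP priors are omitted.

```python
# Multiplicative updates for Itakura-Saito NMF on a toy power spectrogram.
import numpy as np

rng = np.random.default_rng(0)
F, T, K = 64, 100, 4                      # frequency bins, frames, components
V = rng.gamma(1.0, 1.0, size=(F, T))      # nonnegative toy power spectrogram
W = rng.random((F, K)) + 0.1
H = rng.random((K, T)) + 0.1

for _ in range(200):
    Vh = W @ H
    W *= ((V / Vh ** 2) @ H.T) / ((1.0 / Vh) @ H.T)
    Vh = W @ H
    H *= (W.T @ (V / Vh ** 2)) / (W.T @ (1.0 / Vh))
Vh = W @ H
print("IS divergence:", np.sum(V / Vh - np.log(V / Vh) - 1))
```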