Abstract: In large antenna arrays, hardware power consumption becomes a dominant design constraint, making energy efficiency (EE) a first-class objective alongside spectral efficiency (SE). Microwave linear analog computer (MiLAC)-aided beamforming, whose front end is a passive reciprocal stream-to-antenna network, addresses this tension by reducing the number of active radio-frequency chains to the number of streams, at a moderate SE cost. Despite this promise, no EE optimization framework has been established for MiLAC-aided beamforming that accounts for digital-to-analog converter quantization noise and post-quantization transmit power. We fill this gap for downlink multiuser multiple-input single-output (MU-MISO) systems by formulating quantization-aware EE maximization over the MiLAC-feasible beamformer and characterizing the resulting SE-EE tradeoff. Three contributions follow. First, we prove a row-space optimality property of the effective MiLAC-aided beamformer, yielding an equivalent reduced-dimension reformulation whose complexity scales with the number of streams rather than the number of antennas. Second, we develop a low-complexity Dinkelbach-weighted minimum mean-square error algorithm, aided by projected gradient descent, that is guaranteed to converge to a stationary point. Third, we cast the SE-EE tradeoff as a multi-objective problem and trace its Pareto boundary via a weighted-sum method that combines an alternative reduced-dimension coordinate with auxiliary-variable successive convex approximation, yielding convex per-iteration subproblems with guaranteed convergence. Numerical results on a DeepMIMO v4 deployment show that MiLAC-aided beamforming substantially improves EE over digital and hybrid benchmarks at a moderate SE cost and significantly expands the achievable SE-EE operating region.
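
The Dinkelbach ratio update at the heart of such EE maximization can be illustrated on a toy single-link problem (a minimal sketch with assumed values for the channel gain g, circuit power P_c, and power budget P_max; not the paper's MU-MISO algorithm):

```python
import numpy as np

# Toy EE problem:  max_p  log2(1 + g*p) / (P_c + p),  0 <= p <= P_max.
g, P_c, P_max = 4.0, 1.0, 10.0   # illustrative, assumed values

def se(p):
    """Spectral efficiency in bits/s/Hz at transmit power p."""
    return np.log2(1.0 + g * p)

lam, p = 0.0, P_max              # lam is the running EE estimate
for _ in range(100):
    # Inner problem max_p se(p) - lam*(P_c + p) has a closed-form
    # stationary point for this concave-over-affine toy objective.
    if lam > 0.0:
        p = float(np.clip(1.0 / (lam * np.log(2.0)) - 1.0 / g, 0.0, P_max))
    lam_new = se(p) / (P_c + p)  # Dinkelbach ratio update
    if abs(lam_new - lam) < 1e-12:
        lam = lam_new
        break
    lam = lam_new

ee_opt, p_opt = lam, p           # converged EE and power
```

At convergence the ratio update is stationary, i.e., the SE derivative at p_opt equals the achieved EE, which is the Dinkelbach optimality condition.
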
Abstract: Electrocardiogram (ECG) foundation models represent a paradigm shift from task-specific pipelines to generalizable architectures pre-trained on large-scale unlabeled waveform data. This survey presents a unified and deployment-aware review of foundation models and medical large language models (LLMs) for ECG intelligence in cardiovascular disease (CVD) diagnosis, monitoring, and clinical decision support. The central thesis of this survey is that next-generation cardiovascular AI systems will be inherently agentic, requiring the synergistic integration of two complementary model classes: (i) ECG foundation models that act as signal-level interpreters, learning rich electrophysiological representations via self-supervised and multimodal pretraining, and (ii) medical LLMs, trained on biomedical text corpora, that function as knowledge-based reasoning backbones for contextual inference, guideline alignment, and clinical decision support. Accordingly, the survey systematically reviews the existing pool of generalist medical LLMs, as well as ECG foundation models that utilize techniques such as self-supervised learning, multimodal ECG-language alignment, and vision transformer architectures, and that offer capabilities such as zero-shot classification, automated report generation, and longitudinal risk modeling. Recognizing the constraints of consumer-grade wearable edge devices, we further examine model optimization techniques such as quantization, pruning, and knowledge distillation, as well as the role of small language models in enabling low-latency, energy-efficient, and privacy-preserving ECG intelligence on edge platforms such as smartwatches. Finally, we outline future directions in multimodal ECG foundation models, agent-driven monitoring, and explainable, secure edge intelligence, with particular emphasis on real-time, on-device cardiovascular analytics in consumer electronics ecosystems.
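
As a minimal illustration of the quantization technique mentioned among the edge-optimization methods (an assumed symmetric per-tensor int8 scheme on a random weight matrix, not tied to any surveyed model):

```python
import numpy as np

# Assumed stand-in for a model layer's weights.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.2, size=(64, 64)).astype(np.float32)

scale = float(np.abs(W).max()) / 127.0            # map largest magnitude to 127
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_hat = W_q.astype(np.float32) * scale            # dequantize to measure error

quant_mse = float(np.mean((W - W_hat) ** 2))      # rounding error <= (scale/2)^2
```

The int8 tensor is 4x smaller than the float32 original, at the cost of a bounded per-element rounding error.
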




Abstract: With the rapid expansion of low Earth orbit (LEO) constellations, thousands of satellites are now in operation, many equipped with onboard GNSS receivers capable of continuous orbit determination and time synchronization. This development is creating an unprecedented spaceborne GNSS network, offering new opportunities for network-driven precise LEO orbit and clock estimation. Yet, current onboard GNSS processing is largely standalone and often insufficient for high-precision applications, while centralized fusion is challenging due to computational bottlenecks and the lack of in-orbit infrastructure. In this work, we report a decentralized GNSS network over large-scale LEO constellations, where each satellite processes its own measurements while exchanging compact information with neighboring nodes to enable precise orbit and time determination. We model the moving constellation as a dynamic graph and tailor a momentum-accelerated gradient tracking (GT) method to ensure steady convergence despite topology changes. Numerical simulations with constellations containing hundreds of satellites show that the proposed method matches the accuracy of an ideal centralized benchmark, while substantially reducing communication burdens. Ultimately, this framework supports the development of autonomous and self-organizing space systems, enabling high-precision navigation with reduced dependence on continuous ground contact.
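
The gradient-tracking idea can be sketched on a toy consensus problem over a time-varying graph (assumed ring topologies, step size, and momentum coefficient; a minimal sketch, not the paper's exact scheme):

```python
import numpy as np

# n nodes minimize sum_i 0.5*(x - b_i)^2; the minimizer is mean(b).
n = 8
rng = np.random.default_rng(1)
b = rng.normal(size=n)

def grad(x):
    return x - b                 # stacked local gradients

def ring_mixing(offset):
    # symmetric, doubly stochastic mixing over a ring with the given neighbor offset
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i + offset) % n] += 0.25
        W[i, (i - offset) % n] += 0.25
    return W

x = np.zeros(n)                  # per-node estimates
v = np.zeros(n)                  # heavy-ball momentum buffer
y = grad(x)                      # gradient tracker, initialized at local gradients
alpha, beta = 0.1, 0.3           # assumed step size and momentum
for k in range(1500):
    W = ring_mixing(1 if k % 2 == 0 else 3)   # topology changes every iteration
    x_new = W @ x - alpha * y + beta * v
    y = W @ y + grad(x_new) - grad(x)         # track the average gradient
    v = x_new - x
    x = x_new

consensus_err = float(np.max(np.abs(x - b.mean())))
```

Because the mixing matrices are doubly stochastic, the tracker y preserves the average gradient, and all nodes converge to the global minimizer despite the alternating topology.
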




Abstract: Network-based Global Navigation Satellite Systems (GNSS) underpin critical infrastructure and autonomous systems, yet typically rely on centralized processing hubs that limit scalability, resilience, and latency. Here we report a global-scale, decentralized GNSS architecture spanning hundreds of ground stations. By modeling the receiver network as a time-varying graph, we employ a deep linear neural network approach to learn topology-aware mixing schedules that optimize information exchange. This enables a gradient tracking diffusion strategy wherein stations execute local inference and exchange succinct messages to achieve two concurrent objectives: centimeter-level self-localization and network-wide consensus on satellite correction products. The consensus products are broadcast to user receivers as corrections, supporting precise point positioning (PPP) and precise point positioning-real-time kinematic (PPP-RTK). Numerical results demonstrate that our method matches the accuracy of centralized baselines while significantly outperforming existing decentralized methods in convergence speed and communication overhead. By reframing decentralized GNSS as a networked signal processing problem, our results pave the way for integrating decentralized optimization, consensus-based inference, and graph-aware learning as effective tools in operational satellite navigation.
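
A standard hand-crafted baseline for such topology-aware mixing is the Metropolis-Hastings rule; learned schedules replace fixed weights of this kind (toy graph assumed, purely illustrative):

```python
import numpy as np

def metropolis_weights(n, edges):
    """Symmetric, doubly stochastic mixing matrix for an undirected graph."""
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

# a small station ring; in a time-varying graph this is recomputed per epoch
W = metropolis_weights(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The resulting matrix is doubly stochastic with spectral gap below one, the property that drives consensus in diffusion-type updates.
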
Abstract: Inter-satellite-link-enabled low-Earth-orbit (LEO) satellite constellations are evolving toward networked architectures that support constellation-level cooperation, enabling multiple satellites to jointly serve user terminals through cooperative beamforming. While such cooperation can substantially enhance link budgets and achievable rates, its practical realization is challenged by the scalability limitations of centralized beamforming designs and the stringent computational and signaling constraints of large LEO constellations. This paper develops a fully decentralized cooperative beamforming framework for networked LEO satellite downlinks. Using an ergodic-rate-based formulation, we first derive a centralized weighted minimum mean squared error (WMMSE) solution as a performance benchmark. Building on this formulation, we propose a topology-agnostic decentralized beamforming algorithm that localizes the benchmark, exchanges a set of globally coupled variables whose dimensions are independent of the number of antennas, and enforces consensus over arbitrary connected inter-satellite networks. The resulting algorithm admits fully parallel execution across satellites. To further enhance scalability, we eliminate the consensus-related auxiliary variables in closed form and derive a low-complexity per-satellite update rule that is optimal at each local iteration and admits a quasi-closed-form solution via scalar line search. Simulation results show that the proposed decentralized schemes closely approach centralized performance under practical inter-satellite topologies, while significantly reducing computational complexity and signaling overhead, enabling scalable cooperative beamforming for large LEO constellations.
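
The centralized WMMSE benchmark can be sketched for a toy MU-MISO downlink (illustrative sizes and i.i.d. channels; a minimal sketch, not the paper's ergodic-rate variant):

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, K, P, sigma2 = 4, 3, 10.0, 1.0               # assumed antennas, users, power, noise
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)
V = rng.normal(size=(Nt, K)) + 1j * rng.normal(size=(Nt, K))
V *= np.sqrt(P / np.trace(V @ V.conj().T).real)  # meet the sum-power budget

def sum_rate(V):
    G = np.abs(H @ V) ** 2
    sig = np.diag(G)
    return float(np.log2(1.0 + sig / (G.sum(axis=1) - sig + sigma2)).sum())

rates = [sum_rate(V)]
for _ in range(30):
    G = H @ V                                    # G[k, j] = h_k^H v_j
    cov = (np.abs(G) ** 2).sum(axis=1) + sigma2
    u = np.diag(G) / cov                         # MMSE receive scalars
    w = 1.0 / (1.0 - np.abs(np.diag(G)) ** 2 / cov)   # MMSE weights
    A = (H.conj().T * (w * np.abs(u) ** 2)) @ H
    B = H.conj().T * (w * u)                     # right-hand sides, one per user
    lo, hi = 0.0, 100.0                          # bisection on the power dual mu
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        V = np.linalg.solve(A + mu * np.eye(Nt), B)
        lo, hi = (mu, hi) if np.trace(V @ V.conj().T).real > P else (lo, mu)
    rates.append(sum_rate(V))
```

Each pass updates receivers, weights, and precoders in closed form, so the sum rate is monotonically non-decreasing, which is the WMMSE convergence guarantee the benchmark relies on.
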
Abstract: Low Earth orbit (LEO) satellites are a crucial component of future non-terrestrial networks (NTN) due to their lower latency, robust signal strength, shorter revisit times, and dense constellations. However, acquiring reliable channel state information (CSI) in LEO satellite communication remains challenging owing to severe signal attenuation over long propagation distances and short coherence times. Despite these challenges, LEO channels benefit from pronounced line-of-sight dominance and geometric properties inherently tied to positioning information. In this work, we propose an integrated positioning and communication (IPAC) framework for multi-LEO satellite networks to address the unique challenges posed by LEO channels. Specifically, we leverage in-the-loop LEO positioning to exploit users' position information for improving uplink CSI acquisition. To overcome the link-budget limitations of single-satellite systems, cooperative multi-LEO uplink data detection is adopted. By exploiting the different coherence timescales of position-related parameters and random channel gains, we develop a dual-timescale Kalman filter-based IPAC framework: an unscented Kalman filter (UKF) that tracks users' position and velocity on the large timescale, and a Kalman filter that leverages the position information obtained on the large timescale for improved data-aided uplink channel estimation on the small timescale. Finally, the two tasks of channel estimation and cooperative data detection are jointly addressed through the expectation-maximization (EM) algorithm. Numerical results demonstrate that the proposed IPAC approach outperforms the conventional baseline in terms of channel estimation accuracy and communication performance.
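
The large-timescale tracking step rests on the standard Kalman recursion; a minimal 1-D constant-velocity sketch (illustrative parameters, with a linear KF standing in for the paper's UKF over position/velocity states):

```python
import numpy as np

dt, q, r = 1.0, 1e-3, 0.5                          # assumed step, process/meas. noise
F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity transition
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])                # white-acceleration process noise
Hm = np.array([[1.0, 0.0]])                        # only position is measured
R = np.array([[r]])

rng = np.random.default_rng(3)
x_true = np.array([0.0, 1.0])                      # true position and velocity
x, P = np.zeros(2), 10.0 * np.eye(2)               # filter state and covariance
for _ in range(200):
    x_true = F @ x_true                            # noiseless truth for clarity
    z = Hm @ x_true + rng.normal(0.0, np.sqrt(r), 1)
    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + R                          # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - Hm @ x)                       # update
    P = (np.eye(2) - K @ Hm) @ P

vel_err = float(abs(x[1] - x_true[1]))
```

The filter recovers the unmeasured velocity from noisy position measurements alone, which is the mechanism the dual-timescale design exploits for position-aided channel estimation.
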




Abstract: Low Earth orbit (LEO) satellites offer a promising alternative to global navigation satellite systems for precise positioning; however, their relatively low altitudes make them more susceptible to orbital perturbations, which in turn degrade positioning accuracy. In this work, we study LEO-based positioning under orbital errors within a signal-of-opportunity framework. First, we introduce an LEO orbit model that accounts for Earth's non-sphericity and derive a wideband communication model that captures fast- and slow-time Doppler effects and multipath propagation. Subsequently, we perform a misspecified Cramér-Rao bound (MCRB) analysis to evaluate the impact of orbital errors on positioning performance. We then propose a two-stage positioning method consisting of (i) an MCRB-based weighted orbit calibration, followed by (ii) least-squares user positioning using the corrected orbit. The MCRB analysis indicates that orbital errors can induce kilometer-level position biases. Extensive simulations show that the proposed estimator considerably enhances positioning accuracy relative to the orbit-mismatched baseline, yielding errors on the order of a few meters.
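
Stage (ii), least-squares user positioning from corrected satellite positions, can be sketched with a Gauss-Newton iteration on range measurements (all positions, geometry, and noise levels below are assumed toy values):

```python
import numpy as np

rng = np.random.default_rng(4)
sats = np.array([[7000.0, 0.0, 2000.0],
                 [0.0, 7000.0, 2000.0],
                 [-7000.0, 0.0, 2000.0],
                 [0.0, -7000.0, 2000.0],
                 [4000.0, 4000.0, 3000.0]])        # corrected satellite positions (km)
user = np.array([100.0, -200.0, 0.0])              # ground-truth user position (km)
ranges = np.linalg.norm(sats - user, axis=1) + rng.normal(0.0, 1e-3, len(sats))

x = np.zeros(3)                                    # initial guess at the origin
for _ in range(10):
    d = np.linalg.norm(sats - x, axis=1)
    J = (x - sats) / d[:, None]                    # Jacobian of range w.r.t. position
    r = ranges - d                                 # range residuals
    x = x + np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step

pos_err = float(np.linalg.norm(x - user))
```

With meter-level range noise (1e-3 km), the iteration converges to a meter-level position estimate, consistent with the error magnitudes reported above.
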




Abstract: The current body of research on Parkinson's disease (PD) screening, monitoring, and management has evolved along two largely independent trajectories. The first research community focuses on multimodal sensing of PD-related biomarkers using noninvasive technologies such as inertial measurement units (IMUs), force/pressure insoles, electromyography (EMG), electroencephalography (EEG), speech and acoustic analysis, and RGB/RGB-D motion capture systems. These studies emphasize data acquisition, feature extraction, and machine learning-based classification for PD screening, diagnosis, and disease progression modeling. In parallel, a second research community has concentrated on robotic intervention and rehabilitation, employing socially assistive robots (SARs), robot-assisted rehabilitation (RAR) systems, and virtual reality (VR)-integrated robotic platforms for improving motor and cognitive function, enhancing social engagement, and supporting caregivers. Despite the complementary goals of these two domains, their methodological and technological integration remains limited, with minimal data-level or decision-level coupling between the two. With the advent of advanced artificial intelligence (AI), including large language models (LLMs) and agentic AI systems, a unique opportunity now exists to unify these research streams. We envision a closed-loop sensor-AI-robot framework in which multimodal sensing continuously guides the interaction among the patient, caregiver, humanoid robot, and physician through AI agents powered by a multitude of AI models, such as robotic and wearable foundation models, LLM-based reasoning, reinforcement learning, and continual learning. Such a closed-loop system enables personalized, explainable, and context-aware intervention, forming the basis for a digital twin of the PD patient that can adapt over time to deliver intelligent, patient-centered PD care.


Abstract: Reconfigurable intelligent surface (RIS)-aided terahertz (THz)-band communications are promising enablers for future wireless networks. However, array densification at high frequencies introduces significant challenges in accurate channel modeling and estimation, particularly with THz-specific fading, mutual coupling (MC), spatial correlation, and near-field effects. In this work, we model THz outdoor small-scale fading channels using the mixture gamma (MG) distribution, considering absorption losses, spherical wave propagation, MC, and spatial correlation across large base stations and RISs. We derive the distribution of the cascaded RIS-aided channel and investigate linear channel estimation techniques, analyzing the impact of various channel parameters. Numerical results based on precise THz parameters reveal that accounting for spatial correlation, MC, and near-field modeling substantially enhances estimation accuracy, especially in ultra-massive arrays and short-range scenarios. These results underscore the importance of incorporating these effects for precise, physically consistent channel modeling.
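
The value of correlation-aware linear estimation can be illustrated with an LMMSE-versus-LS comparison under an assumed exponential spatial-correlation model (a stand-in for the THz-specific statistics above; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, snr, rho, trials = 16, 10.0, 0.9, 300      # antennas, pilot SNR, correlation
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # covariance
L = np.linalg.cholesky(R)
Wf = R @ np.linalg.inv(R + np.eye(N) / snr)   # LMMSE filter for unit pilots

err_lmmse = err_ls = 0.0
for _ in range(trials):
    h = L @ (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
    n = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr)
    y = h + n                                 # pilot observation
    err_lmmse += np.mean(np.abs(Wf @ y - h) ** 2)
    err_ls += np.mean(np.abs(y - h) ** 2)     # LS simply takes y as the estimate

mse_lmmse, mse_ls = err_lmmse / trials, err_ls / trials
```

Exploiting the correlation structure lowers the estimation MSE below the LS floor of 1/SNR, mirroring the accuracy gains reported when spatial correlation is modeled.
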




Abstract: The integration of pattern-reconfigurable antennas into hybrid multiple-input multiple-output (MIMO) architectures presents a promising path toward high-efficiency, low-cost transceiver solutions. Pattern-reconfigurable antennas can dynamically steer per-antenna radiation patterns, enabling more efficient power utilization and interference suppression. In this work, we study a tri-hybrid MIMO architecture for multi-user communication that integrates digital, analog, and antenna-domain precoding using pattern-reconfigurable antennas. To characterize the reconfigurability of antenna radiation patterns, we develop two models: Model I, which captures realistic hardware constraints through limited pattern selection, and Model II, which explores the performance upper bound by assuming arbitrary pattern generation. Based on these models, we develop two corresponding tri-hybrid precoding algorithms grounded in the weighted minimum mean square error (WMMSE) framework, which alternately optimize the digital, analog, and antenna precoders under practical per-antenna power constraints. Realistic simulations in ray-tracing-generated environments are used to evaluate the proposed system and algorithms. The results demonstrate the significant potential of the considered tri-hybrid architecture in enhancing communication performance and hardware efficiency. However, they also reveal that existing hardware cannot yet fully realize these performance gains, underscoring the need for joint progress in antenna design and communication theory.