Abstract: Classical information theory typically assumes reliable receiver-side processing. We study remote inference when communication is noisy and the receiver itself is built from unreliable components under a finite redundancy budget. Under a committed/no-bypass receiver closure, task-relevant information can affect the final estimate only by passing through a budgeted collection of vulnerable primitives, unless an explicit protected bypass is modeled. Modeling each vulnerable primitive as a memoryless noisy channel yields a baseline supply--demand converse: the task-relevant information needed to attain a target distortion cannot exceed the smaller of the total information supplied by the communication channel and the total information supplied by the vulnerable compute budget. Our main converse shows that committed intermediate interfaces create additional first-order serial cuts and receiver-internal computation-graph cuts, captured in general by a receiver-internal compute min-cut converse. In particular, the twofold loss in the symmetric two-stage hard-separation special case is not inherent to unreliable receiver computation but is induced by hard separation under the committed/no-bypass closure; this extra first-order tax is therefore closure-dependent rather than universal. On the converse side, if downstream modules retain soft visibility of the raw channel output, the converse reduces to the single-bottleneck supply, up to any explicitly reserved soft-path budget. Under a separate, stronger protected-support closure with reliable decoder and control support, we establish achievability results for task-direct and serial hard-separation constructions. For the fully noisy-logic regime, we obtain only a conservative depth-dependent converse, and matched achievability remains open.
Abstract: Tone injection (TI) is a promising distortionless PAPR-reduction technique that incurs no spectral-efficiency loss. However, state-of-the-art TI schemes based on random candidate generation or the clipping-noise spectrum suffer from fundamental limitations in PAPR performance. In this paper, we propose novel TI schemes compatible with both OFDM and AFDM systems. The proposed schemes iteratively update the TI sequence via a candidate-ranking procedure guided by time-domain local peaks, which accurately selects effective candidates at a complexity comparable to that of the fast Fourier transform. Depth-first search is further integrated to enhance PAPR performance by exploiting the tree structure of the process. Simulations demonstrate that the proposed schemes achieve over 1 dB of PAPR gain over baseline TI schemes at comparable complexity. The gain is consistent across various numbers of subcarriers under controlled per-iteration complexity, confirming a superior performance-complexity trade-off for both OFDM and AFDM.
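For context, the peak-to-average power ratio that such schemes minimize is computed from the oversampled IFFT of a frequency-domain symbol. The sketch below implements only this standard PAPR metric, not the proposed candidate-ranking or depth-first-search procedure; the function name and the QPSK example are illustrative:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR in dB of one OFDM symbol: oversampled IFFT to the time
    domain, then peak power divided by mean power."""
    n = len(freq_symbols)
    # zero-pad the middle of the spectrum for time-domain oversampling,
    # so peaks between Nyquist-rate samples are captured
    spec = np.concatenate([freq_symbols[:n // 2],
                           np.zeros(n * (oversample - 1), dtype=complex),
                           freq_symbols[n // 2:]])
    x = np.fft.ifft(spec)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# example: random QPSK on 64 subcarriers
rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 64) + 1j * np.pi / 4)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```

A single occupied subcarrier yields a constant-envelope signal (0 dB PAPR), while random multicarrier symbols typically sit several dB higher, which is the gap TI attacks.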
Abstract: This paper studies a feedback-driven configuration-tuning framework for adaptive sensing feedback in Integrated Sensing and Communication (ISAC) systems. We propose a framework in which the User Equipment (UE) adapts sensing parameters under dynamic conditions while satisfying network-defined constraints. The problem is formulated as a stochastic constrained optimization problem aimed at improving sensing reliability and latency. We consider a bistatic ISAC sensing-feedback setup and instantiate the framework via threshold optimization as a representative case study, enabling benchmarking against baseline methods. To ensure efficiency under UE computational limits, we propose Ranking-Aware, Constrained, and Efficient CMA-ES (RACE CMA), which integrates two-stage racing, common random numbers, noise-aware ranking, and feasibility-based constraint handling. Results show that the proposed approach improves sensing reliability by about 35 percent while reducing computational cost by about 25 percent, yielding roughly a twofold gain in performance-cost efficiency. This highlights UE-side configuration tuning as a promising mechanism for enhancing closed-loop ISAC performance under practical system constraints.
Abstract: The performance of constructive interference precoding (CIP) for multi-user multiple-input multiple-output (MU-MIMO) systems is governed by the structure of the constructive interference (CI) regions, yet this structure is overlooked in conventional constellation design. This work proposes the region-based constellation (RBC) model to lay the foundation for CIP constellation design. An RBC directly defines the mapping between messages and their feasible regions, instead of deriving the regions from an existing constellation. To provide insight for RBC design, we study the limitations of quadrature-amplitude-modulation (QAM)-based CIP. Analytical results show that the restrictive CI regions of QAM symbols are systematically misaligned with the objective-minimising sign pattern, resulting in a significant gap to the theoretical performance limit. To improve sign alignment, we propose two novel RBC schemes with non-convex feasible regions, namely mirrored-ends QAM (ME-QAM) and real-extended ME-QAM. A low-complexity algorithm is also developed for the resulting mixed-integer quadratic program, achieving a complexity comparable to that of QAM-based CIP. Simulation results with constellation sizes $\{16,64\}$ demonstrate up to a $4$~dB signal-to-noise-ratio gain of the proposed schemes over QAM-based CIP. The proposed RBC model is also applicable to other systems with non-bijective modulation, representing a promising direction for future research.
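For readers unfamiliar with CI regions: for QAM under CIP, the feasible region of each symbol is its decision box, relaxed outward along any coordinate where the symbol sits on an outer level. The sketch below checks membership in this standard region for unnormalized 16-QAM; it does not implement the proposed non-convex ME-QAM regions, and all names are illustrative:

```python
import numpy as np

LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])  # unnormalized 16-QAM per axis

def coord_region(level):
    """Decision interval of one PAM level; outer levels extend to
    infinity outward (the CI relaxation for QAM)."""
    i = int(np.argmin(np.abs(LEVELS - level)))
    lo = -np.inf if i == 0 else (LEVELS[i - 1] + LEVELS[i]) / 2
    hi = np.inf if i == len(LEVELS) - 1 else (LEVELS[i] + LEVELS[i + 1]) / 2
    return lo, hi

def in_ci_region(r, s):
    """True if the complex point r lies in the CI region of 16-QAM symbol s."""
    for rc, sc in ((r.real, s.real), (r.imag, s.imag)):
        lo, hi = coord_region(sc)
        if not (lo <= rc <= hi):
            return False
    return True
```

Note that inner symbols get a bounded box (no useful relaxation), which is exactly the restrictiveness the abstract identifies as the motivation for RBC design.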
Abstract: Despite remarkable progress, automatic speaker verification (ASV) systems typically lack the transparency required for high-accountability applications. Motivated by how human experts perform forensic speaker comparison (FSC), we propose a speaker verification network with phonetic interpretability, PhiNet, designed to enhance both local and global interpretability by leveraging phonetic evidence in decision-making. For users, PhiNet provides detailed phonetic-level comparisons that enable manual inspection of speaker-specific features and facilitate a more critical evaluation of verification outcomes. For developers, it offers explicit reasoning behind verification decisions, simplifying error tracing and informing hyperparameter selection. In our experiments, we demonstrate PhiNet's interpretability with practical examples, including its application in analyzing the impact of different hyperparameters. We conduct both qualitative and quantitative evaluations of the proposed interpretability methods and assess speaker verification performance across multiple benchmark datasets, including VoxCeleb, SITW, and LibriSpeech. Results show that PhiNet achieves performance comparable to traditional black-box ASV models while offering meaningful, interpretable explanations for its decisions, bridging the gap between ASV and forensic analysis.
Abstract: Deep reinforcement learning (RL) suffers severely from plasticity loss due to its inherent non-stationarity, which impairs the ability to adapt to new data and learn continually. Unfortunately, our understanding of how plasticity loss arises, dissipates, and can be resolved remains limited to empirical findings, leaving the theoretical side underexplored. To address this gap, we study the plasticity loss problem from the theoretical perspective of network optimization. By formally characterizing the two culprit factors in the online RL process, namely the non-stationarity of data distributions and the non-stationarity of targets induced by bootstrapping, our theory attributes the loss of plasticity to two mechanisms: the rank collapse of the Neural Tangent Kernel (NTK) Gram matrix and the $\Theta(1/k)$ decay of gradient magnitude. The first mechanism echoes prior empirical findings from a theoretical perspective and sheds light on the effects of existing methods, e.g., network reset, neuron recycling, and noise injection. Against this backdrop, we focus primarily on the second mechanism and aim to alleviate plasticity loss by addressing the gradient attenuation issue, which is orthogonal to existing methods. We propose Sample Weight Decay, a lightweight method that restores gradient magnitude, as a general remedy for plasticity loss in deep RL methods based on experience replay. In experiments, we evaluate the efficacy of Sample Weight Decay upon TD3, Double DQN, and SAC with the SimBa architecture on MuJoCo, ALE, and DeepMind Control Suite tasks. The results demonstrate that Sample Weight Decay effectively alleviates plasticity loss and consistently improves learning performance across various configurations of deep RL algorithms, UTD ratios, network architectures, and environments, achieving SOTA performance on challenging DMC Humanoid tasks.
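As a rough illustration of per-sample re-weighting in an experience-replay loss, the sketch below decays weights exponentially in sample age and renormalizes the batch so the overall loss scale is preserved. The abstract does not specify Sample Weight Decay's exact rule, so this weighting scheme and all function names are hypothetical:

```python
import numpy as np

def replay_weights(ages, decay=0.05):
    # Hypothetical rule: more recently collected samples receive larger
    # weight; renormalizing to mean 1 keeps the average loss magnitude
    # while shifting gradient mass toward newer data.
    w = np.exp(-decay * np.asarray(ages, dtype=float))
    return w / w.mean()

def weighted_td_loss(td_errors, ages):
    # Weighted mean-squared TD error over one replay minibatch.
    w = replay_weights(ages)
    return float(np.mean(w * np.asarray(td_errors, dtype=float) ** 2))
```

The renormalization step is the part relevant to the gradient-attenuation argument: it rescales the minibatch gradient rather than merely shrinking it.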
Abstract: This paper revisits linear precoding, namely matched-filter (MF) and zero-forcing (ZF) precoding, in a semantic multiple-input multiple-output (MIMO) system empowered by generative AI. The aim is to examine whether the interference, channel state information (CSI) accuracy, and scalability limitations of conventional MIMO systems remain critical. Theoretical analysis, based on the generative inference model and Lipschitz-continuity assumptions, reveals reduced sensitivity to interference and channel imperfections, as well as inferior performance in high-SINR regimes compared to conventional MIMO systems. Simulation results validate the analysis and show that MF achieves semantic performance comparable to ZF under both perfect and imperfect CSI. These findings suggest that semantic MIMO relaxes the need for aggressive interference mitigation and highly accurate CSI, while improving scalability through reduced computational and implementation complexity.
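The two baselines revisited here have standard closed forms: MF applies the channel's conjugate transpose, while ZF additionally inverts the user-side Gram matrix so the effective channel is diagonal. A minimal sketch with unit total-power normalization (function names are ours, not the paper's):

```python
import numpy as np

def mf_precoder(H):
    """Matched-filter precoding: W proportional to H^H
    (maximizes per-user signal power, ignores interference)."""
    W = H.conj().T
    return W / np.linalg.norm(W)  # Frobenius-norm power normalization

def zf_precoder(H):
    """Zero-forcing precoding: W proportional to H^H (H H^H)^{-1},
    making H @ W diagonal, i.e. nulling inter-user interference."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W)
```

With a K x M channel (K users, M antennas, M >= K), `zf_precoder` yields an M x K matrix whose composition with H has no off-diagonal leakage, which is exactly the interference-mitigation machinery the abstract argues semantic MIMO can relax.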
Abstract: Constructing computer-aided design (CAD) models is labor-intensive but essential for engineering and manufacturing. Recent advances in Large Language Models (LLMs) have inspired LLM-based CAD generation that represents CAD models as command sequences. However, these methods struggle in practical scenarios because the command-sequence representation does not support entity selection (e.g., faces or edges), limiting its ability to support complex editing operations such as chamfer or fillet. Further, the discretization of continuous variables during sketch and extrude operations may result in topological errors. To address these limitations, we present Pointer-CAD, a novel LLM-based CAD generation framework that leverages a pointer-based command-sequence representation to explicitly incorporate the geometric information of B-rep models into sequential modeling. In particular, Pointer-CAD decomposes CAD model generation into steps, conditioning the generation of each step on both the textual description and the B-rep generated in previous steps. Whenever an operation requires the selection of a specific geometric entity, the LLM predicts a Pointer that selects the most feature-consistent candidate from the available set. Such selection also reduces the quantization error of the command-sequence representation. To support the training of Pointer-CAD, we develop a data-annotation pipeline that produces expert-level natural language descriptions and apply it to build a dataset of approximately 575K CAD models. Extensive experimental results demonstrate that Pointer-CAD effectively supports the generation of complex geometric structures and reduces segmentation error to an extremely low level, significantly improving over prior command-sequence methods and mitigating the topological inaccuracies introduced by quantization error.
Abstract: Token Communication (TokenCom) is a new paradigm, motivated by the recent success of Large AI Models (LAMs) and Multimodal Large Language Models (MLLMs), in which tokens serve as unified units of communication and computation, enabling efficient semantic- and goal-oriented information exchange in future wireless networks. In this paper, we propose a novel Video TokenCom framework for textual-intent-guided multi-rate video communication with Unequal Error Protection (UEP)-based source-channel coding adaptation. The proposed framework integrates user-intended textual descriptions with discrete video tokenization and unequal error protection to enhance semantic fidelity under tight bandwidth constraints. First, discrete video tokens are extracted through a pretrained video tokenizer, while text-conditioned vision-language modeling and optical-flow propagation are jointly used to identify tokens that correspond to user-intended semantics across space and time. Next, we introduce a semantic-aware multi-rate bit-allocation strategy, in which tokens highly related to the user intent are encoded at full codebook precision, whereas non-intended tokens are represented through reduced-codebook-precision differential encoding, enabling rate savings while preserving semantic quality. Finally, a source- and channel-coding adaptation scheme is developed to adapt bit allocation and channel coding to varying resources and link conditions. Experiments on various video datasets demonstrate that the proposed framework outperforms both conventional and semantic communication baselines in perceptual and semantic quality over a wide SNR range.
Abstract: Recent advancements in Multimodal Large Language Models (MLLMs) pursue omni-perception capabilities, yet integrating robust sensory grounding with complex reasoning remains a challenge, particularly for underrepresented regions. In this report, we introduce the research preview of MERaLiON2-Omni (Alpha), a 10B-parameter multilingual omni-perception model tailored for Southeast Asia (SEA). We present a progressive training pipeline that explicitly decouples and then integrates "System 1" (Perception) and "System 2" (Reasoning) capabilities. First, we establish a robust Perception Backbone by aligning region-specific audio-visual cues (e.g., Singlish code-switching, local cultural landmarks) with a multilingual LLM through orthogonal modality adaptation. Second, to inject cognitive capabilities without large-scale supervision, we propose a cost-effective Generate-Judge-Refine pipeline. By using a Super-LLM to filter hallucinations and resolve conflicts via a consensus mechanism, we synthesize high-quality silver data that transfers textual Chain-of-Thought reasoning to multimodal scenarios. Comprehensive evaluation on our newly introduced SEA-Omni Benchmark Suite reveals an Efficiency-Stability Paradox: while reasoning acts as a non-linear amplifier for abstract tasks (significantly boosting mathematical and instruction-following performance), it introduces instability in low-level sensory processing. Specifically, we identify Temporal Drift in long-context audio, where extended reasoning desynchronizes the model from acoustic timestamps, and Visual Over-interpretation, where logic overrides pixel-level reality. This report details the architecture, the data-efficient training recipe, and a diagnostic analysis of the trade-offs between robust perception and structured reasoning.