Abstract: This paper proposes a blocker-aware multicarrier integrated sensing and communication (ISAC)-non-orthogonal multiple access (NOMA) system that leverages hybrid beamforming and dynamic power allocation to enhance spectrum efficiency in 6G networks. Recognizing the performance degradation caused by environmental blockers, the system introduces a joint waveform design that ensures robust operation under varying channel conditions. A channel switching mechanism reroutes communication through alternative non-line-of-sight paths when the primary line-of-sight links are obstructed. Moreover, a dynamic power allocation strategy enforces a minimum rate constraint for the weak NOMA user, ensuring consistent quality of service. Extensive simulations over multiple blockage scenarios and signal-to-noise ratio (SNR) conditions validate the effectiveness of the proposed solution. Notably, under severe blockage, the system achieves up to a 400% sensing-rate enhancement at 15 dB SNR, with only a 20% reduction in communication rate. These results corroborate the system's ability to adapt and optimize joint sensing-communication performance in practical deployment environments.
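The minimum-rate constraint for the weak NOMA user can be made concrete with the standard two-user downlink NOMA rate expression. The sketch below (function names and the specific allocation rule are assumptions; the abstract does not specify the paper's algorithm) solves for the smallest power fraction that lets the weak user, which decodes its signal while treating the strong user's signal as interference, reach a target rate:

```python
import math

def noma_power_split(gamma_weak, r_min):
    """Minimum power fraction for the weak NOMA user so its rate meets
    r_min (bits/s/Hz); the remainder goes to the strong user.

    gamma_weak: weak user's received SNR, P*|h|^2/N0, in linear scale.
    Hypothetical helper illustrating the min-rate constraint only.
    """
    t = 2 ** r_min - 1                           # SINR required for r_min
    a_w = t * (gamma_weak + 1) / (gamma_weak * (1 + t))
    if a_w >= 1.0:
        return None                              # infeasible at this SNR
    return a_w, 1.0 - a_w                        # (weak, strong) fractions

def weak_user_rate(a_w, a_s, gamma_weak):
    # Weak user decodes its own signal, treating the strong user's
    # superposed signal as interference (no SIC at the weak user).
    return math.log2(1 + a_w * gamma_weak / (a_s * gamma_weak + 1))
```

At the returned split the weak user's rate equals the target exactly, so any larger fraction satisfies the constraint with margin.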
Abstract: The Transformer self-attention mechanism serves as the core of modern language models, yet it often suffers from localization, where attention collapses onto a limited subset of tokens and fails to capture long-range dependencies. To address this issue, we propose Self-Attention One-step Belief Propagation (SAOBP), a refinement framework that injects multi-hop relationships through a belief propagation process. To interpret and quantify these interactions, we introduce Global Token Dependency (GTD), a metric that captures the relative contribution of multi-hop connections within the attention graph. Empirical results indicate that SAOBP helps prevent entropy collapse in deeper layers and adaptively maintains GTD at task-appropriate levels, thereby supporting improvements in model performance. Importantly, we observe competitive gains in small-scale models, highlighting SAOBP's potential for improving inference quality in resource-constrained scenarios.
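The idea of injecting multi-hop relationships into an attention matrix can be sketched as blending the direct (one-hop) attention with its two-hop composition. This is a schematic reading of the abstract, not the paper's exact update rule, and the GTD proxy below is an assumed stand-in for the paper's definition:

```python
import numpy as np

def one_step_bp(A, lam=0.5):
    """One-step propagation mixing direct attention A with two-hop
    paths A @ A. A is a row-stochastic attention matrix; lam is an
    assumed mixing weight, not a parameter from the paper."""
    A2 = A @ A                                  # two-hop token influence
    M = (1 - lam) * A + lam * A2                # blend direct and multi-hop
    return M / M.sum(axis=1, keepdims=True)     # keep rows stochastic

def global_token_dependency(A):
    """Toy GTD proxy: mean total-variation distance between one-hop and
    two-hop attention rows, in [0, 1]. 0 means multi-hop adds nothing
    (e.g. identity attention); values near 1 mean two-hop paths reach
    tokens the direct attention ignores."""
    A2 = A @ A
    return 0.5 * np.abs(A2 - A).sum(axis=1).mean()
```

For a collapsed (near-one-hot) attention pattern, the blended matrix spreads mass over tokens reachable in two hops, which is one way to counteract the localization the abstract describes.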
Abstract: Database normalization is crucial to preserving data integrity. However, it is time-consuming and error-prone, as it is typically performed manually by data engineers. To this end, we present Miffie, a database normalization framework that leverages the capabilities of large language models. Miffie enables automated data normalization without human effort while preserving high accuracy. At the core of Miffie is a dual-model self-refinement architecture that combines the best-performing models for normalized schema generation and verification, respectively. The generation module eliminates anomalies based on the feedback of the verification module until the output schema satisfies the requirements for normalization. We also carefully design task-specific zero-shot prompts to guide the models toward both high accuracy and cost efficiency. Experimental results show that Miffie can normalize complex database schemas while maintaining high accuracy.
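The generation-verification feedback loop described above has the shape of a generic generate-verify refinement loop. The sketch below is a minimal skeleton under that assumption; the callables stand in for the two LLM-backed modules, and all names and signatures are hypothetical, not Miffie's API:

```python
def refine_schema(schema, generate, verify, max_rounds=5):
    """Dual-model self-refinement loop (assumed structure):
    `generate(schema, feedback)` proposes a normalized schema and
    `verify(schema)` returns (ok, feedback). The loop feeds the
    verifier's feedback back to the generator until the schema
    passes verification or the round budget is exhausted."""
    feedback = None
    for _ in range(max_rounds):
        schema = generate(schema, feedback)
        ok, feedback = verify(schema)
        if ok:
            return schema
    raise RuntimeError("schema failed verification after max_rounds")
```

Bounding the number of rounds keeps model-call cost predictable, which matters given the abstract's emphasis on cost efficiency.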
Abstract: Scientific reasoning requires not only long-chain reasoning processes but also knowledge of domain-specific terminology and adaptation to updated findings. To address these challenges, we introduce RAISE, a step-by-step retrieval-augmented framework that retrieves logically relevant documents from an in-the-wild corpus. RAISE proceeds in three steps: problem decomposition, logical query generation, and logical retrieval. We observe that RAISE consistently outperforms other baselines on scientific reasoning benchmarks. Our analysis shows that, unlike the baselines, RAISE retrieves documents that are not only similar in terms of domain knowledge but also more logically relevant.
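The three steps named in the abstract compose naturally into a pipeline. The sketch below shows only that control flow; the three callables are assumed stand-ins for the framework's model and retriever calls, and none of the names come from the paper:

```python
def raise_pipeline(problem, decompose, make_query, retrieve):
    """Three-step retrieval sketch following the abstract's outline.
    decompose(problem) -> iterable of sub-problems;
    make_query(step)   -> a logical query for one sub-problem;
    retrieve(query)    -> documents relevant to that query."""
    docs = []
    for step in decompose(problem):     # 1. problem decomposition
        query = make_query(step)        # 2. logical query generation
        docs.extend(retrieve(query))    # 3. logical retrieval
    return docs
```

Decomposing first means each retrieval targets one reasoning step rather than the whole problem, which is one plausible reading of why the retrieved documents end up logically, not just topically, relevant.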