Abstract: Signal dimension, defined here as the number of signal copies with different delays or angular shifts, is a prerequisite for many high-resolution delay estimation and direction-finding algorithms in sensing and communication systems. Correctly estimating the signal dimension itself therefore becomes crucial. In this paper, we present a deep learning-based signal dimension estimator (DLSDE) that operates on a single-snapshot observation, using phased array radar as the example application. Unlike traditional model-based estimators and existing deep learning-based signal dimension estimators, which rely on eigen-decomposition and information criteria and therefore require multiple data snapshots, the proposed DLSDE uses a two-dimensional convolutional neural network (2D-CNN) to automatically learn features corresponding to the dimension of the received signal. Our study shows that DLSDE significantly outperforms traditional methods in terms of successful detection rate and resolution. In a phased array radar with 32 antenna elements, DLSDE improves the detection signal-to-noise ratio (SNR) by more than 15 dB and the angular resolution by more than 1°. This makes the proposed method suitable for distinguishing multiple signals that are spatially correlated or have small angular separation. More importantly, our solution operates on a single-snapshot signal, a setting that existing deep learning-based methods cannot handle.
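The traditional baseline this abstract contrasts against, eigen-decomposition of a multi-snapshot sample covariance followed by an information criterion, can be sketched with the classical MDL estimator. The array geometry, angles, SNR, and snapshot count below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def mdl_estimate(eigvals, n_snapshots):
    """Minimum Description Length estimate of the number of sources."""
    lam = np.sort(eigvals)[::-1]  # eigenvalues, descending
    m = len(lam)
    costs = []
    for k in range(m):
        tail = lam[k:]                        # candidate noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))   # geometric mean
        ari = np.mean(tail)                   # arithmetic mean
        loglik = -n_snapshots * (m - k) * np.log(geo / ari)
        penalty = 0.5 * k * (2 * m - k) * np.log(n_snapshots)
        costs.append(loglik + penalty)
    return int(np.argmin(costs))

# Illustrative setup: 8-element half-wavelength ULA, two far-field sources,
# 20 dB SNR, 500 snapshots (multi-snapshot data, which DLSDE does NOT need).
rng = np.random.default_rng(0)
m_ant, n_snap = 8, 500
angles = np.deg2rad([0.0, 30.0])
steer = np.exp(1j * np.pi * np.arange(m_ant)[:, None] * np.sin(angles))
s = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m_ant, n_snap))
               + 1j * rng.standard_normal((m_ant, n_snap))) / np.sqrt(2)
x = steer @ s + noise                 # received snapshots
r = x @ x.conj().T / n_snap           # sample covariance (needs many snapshots)
k_hat = mdl_estimate(np.linalg.eigvalsh(r).real, n_snap)
print(k_hat)  # two sources -> MDL should recover 2
```

With a single snapshot the sample covariance is rank-one and this estimator breaks down, which is the gap the proposed DLSDE targets.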
Abstract: Despite bilingual speakers frequently using mixed-language queries in web searches, Information Retrieval (IR) research on them remains scarce. To address this, we introduce MiLQ, a Mixed-Language Query test set, the first public benchmark of mixed-language queries, confirmed to be realistic and highly preferred. Experiments show that multilingual IR models perform moderately on MiLQ and inconsistently across native, English, and mixed-language queries, suggesting the potential of code-switched training data for building IR models robust to such queries. Meanwhile, intentional English mixing in queries proves an effective strategy for bilinguals searching English documents, which our analysis attributes to enhanced token matching compared to native queries.
Abstract: As sixth-generation (6G) networks advance, large language models (LLMs) are increasingly integrated into 6G infrastructure to enhance network management and intelligence. However, traditional LLM architectures struggle to meet the stringent latency and security requirements of 6G, especially as increasing sequence lengths lead to greater task complexity. This paper proposes EdgePrompt, a cloud-edge collaborative framework based on a hierarchical attention splicing mechanism. EdgePrompt employs distributed key-value (KV) pair optimization techniques to accelerate inference and adapt to network conditions. Additionally, to reduce the risk of data leakage, EdgePrompt incorporates a privacy-preserving strategy by isolating sensitive information during processing. Experiments on public datasets show that EdgePrompt effectively improves inference throughput and reduces latency, providing a reliable solution for LLM deployment in 6G environments.
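The distributed KV optimization mentioned above builds on standard KV caching in autoregressive attention: each token's key/value projections are computed once and reused at later decode steps instead of being recomputed. The toy single-head attention below illustrates that mechanism only; shapes and names are assumptions, not EdgePrompt's actual implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
d, T = 4, 6
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
x = rng.standard_normal((T, d))  # one token embedding per decode step

# Reference: recompute every key/value from scratch at each step (O(t) projections).
full_out = []
for t in range(T):
    k = x[: t + 1] @ wk
    v = x[: t + 1] @ wv
    q = x[t] @ wq
    full_out.append(softmax(q @ k.T / np.sqrt(d)) @ v)

# KV cache: project each token exactly once, append to the cache, reuse later.
k_cache, v_cache, cached_out = [], [], []
for t in range(T):
    k_cache.append(x[t] @ wk)
    v_cache.append(x[t] @ wv)
    q = x[t] @ wq
    k = np.stack(k_cache)
    v = np.stack(v_cache)
    cached_out.append(softmax(q @ k.T / np.sqrt(d)) @ v)

print(np.allclose(full_out, cached_out))  # True: same outputs, fewer projections
```

Because the cache grows linearly with sequence length, where it is stored and how it is split between cloud and edge becomes the central cost in a collaborative deployment, which is the problem the distributed KV optimization addresses.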