Information extraction is the process of automatically deriving structured information from unstructured text.
Current multi-object tracking (MOT) algorithms typically overlook issues inherent in low-quality videos, leading to significant degradation in tracking performance when confronted with real-world image deterioration. Therefore, advancing the application of MOT algorithms in real-world low-quality video scenarios represents a critical and meaningful endeavor. To address the challenges posed by low-quality scenarios, inspired by vision-language models, this paper proposes a Visual Semantic Enhancement-guided Multi-Object Tracking framework (VSE-MOT). Specifically, we first design a tri-branch architecture that leverages a vision-language model to extract global visual semantic information from images and fuse it with query vectors. Subsequently, to further enhance the utilization of visual semantic information, we introduce the Multi-Object Tracking Adapter (MOT-Adapter) and the Visual Semantic Fusion Module (VSFM). The MOT-Adapter adapts the extracted global visual semantic information to suit multi-object tracking tasks, while the VSFM improves the efficacy of feature fusion. Through extensive experiments, we validate the effectiveness and superiority of the proposed method in real-world low-quality video scenarios. Its tracking performance metrics outperform those of existing methods by approximately 8% to 20%, while maintaining robust performance in conventional scenarios.
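The abstract does not spell out how the global visual semantic information is fused with the query vectors; the following minimal PyTorch sketch illustrates one plausible reading, with an adapter that projects a vision-language global feature to the query dimension and a cross-attention fusion step. All module names, dimensions, and the residual design are illustrative assumptions, not the authors' implementation of the MOT-Adapter or VSFM.

```python
import torch
import torch.nn as nn

class SemanticAdapter(nn.Module):
    """Illustrative adapter: maps a global visual-semantic feature to the tracker's query dimension."""
    def __init__(self, sem_dim: int, query_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(sem_dim, query_dim),
            nn.GELU(),
            nn.Linear(query_dim, query_dim),
        )

    def forward(self, sem_feat: torch.Tensor) -> torch.Tensor:
        # sem_feat: (batch, sem_dim) global feature from a vision-language model
        return self.proj(sem_feat)

class SemanticQueryFusion(nn.Module):
    """Illustrative fusion: tracking queries cross-attend to the adapted semantic feature."""
    def __init__(self, query_dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(query_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(query_dim)

    def forward(self, queries: torch.Tensor, sem_token: torch.Tensor) -> torch.Tensor:
        # queries: (batch, num_queries, query_dim); sem_token: (batch, query_dim)
        kv = sem_token.unsqueeze(1)        # treat the global feature as a single key/value token
        fused, _ = self.attn(queries, kv, kv)
        return self.norm(queries + fused)  # residual fusion

# Toy usage with made-up dimensions
adapter, fusion = SemanticAdapter(512, 256), SemanticQueryFusion(256)
sem = torch.randn(2, 512)            # global visual-semantic feature per image
queries = torch.randn(2, 100, 256)   # detection/tracking queries
out = fusion(queries, adapter(sem))  # (2, 100, 256)
```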
This paper shows how a multimodal large language model (MLLM) can expand urban measurement capacity and support tracking of place-based policy interventions. Using a structured, reason-then-estimate pipeline on street-view imagery, GPT-4o infers neighborhood poverty and tree canopy, which we embed in a quasi-experimental design evaluating the legacy of 1930s redlining. GPT-4o recovers the expected adverse socio-environmental legacy effects of redlining, with estimates statistically indistinguishable from authoritative sources, and it outperforms a conventional pixel-based segmentation baseline, consistent with the idea that holistic scene reasoning extracts higher-order information beyond object counts alone. These results position MLLMs as policy-grade instruments for neighborhood measurement and motivate broader validation across policy-evaluation settings.
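The exact prompting protocol is not given here; as a rough illustration, a reason-then-estimate call against the OpenAI Chat Completions API might look like the sketch below, where the model is asked to reason about a street-view image before emitting numeric estimates. The prompt wording, JSON schema, and image URL are hypothetical.

```python
import json
from openai import OpenAI  # assumes the openai Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

PROMPT = (
    "You are estimating neighborhood conditions from a street-view image. "
    "First reason step by step about visible housing quality, upkeep, and vegetation. "
    "Then return ONLY a JSON object: "
    '{"reasoning": str, "poverty_rate_pct": float, "tree_canopy_pct": float}'
)

def reason_then_estimate(image_url: str) -> dict:
    # Single reason-then-estimate query against GPT-4o; structure and fields are illustrative.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# estimates = reason_then_estimate("https://example.com/streetview/tile_001.jpg")  # hypothetical URL
```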
Traditional query expansion techniques for addressing vocabulary mismatch problems in information retrieval are context-sensitive and may lead to performance degradation. As an alternative, document expansion research has gained attention, but existing methods such as Doc2Query have limitations including excessive preprocessing costs, increased index size, and reliability issues with generated content. To mitigate these problems and seek more structured and efficient alternatives, this study proposes a method that divides documents into chunk units and generates textual data for each chunk to simultaneously improve retrieval efficiency and accuracy. The proposed "Chunk Knowledge Generation Model" adopts a T5-based multi-task learning structure that simultaneously generates titles and candidate questions from each document chunk while extracting keywords from user queries. This approach maximizes computational efficiency by generating and extracting three types of semantic information in parallel through a single encoding and two decoding processes. The generated data is utilized as additional information in the retrieval system. GPT-based evaluation on 305 query-document pairs showed that retrieval using the proposed model achieved 95.41% accuracy at Top@10, demonstrating superior performance compared to document chunk-level retrieval. This study contributes by proposing an approach that simultaneously generates titles and candidate questions from document chunks for application in retrieval pipelines, and provides empirical evidence applicable to large-scale information retrieval systems by demonstrating improved retrieval accuracy through qualitative evaluation.
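As a loose sketch of the chunk-level generation idea, the snippet below runs two task-prefixed generation calls on a stock T5 checkpoint, one producing a title and one a candidate question per chunk. This is a simplification: the proposed model shares a single encoding across two decoders and additionally extracts query keywords, and the checkpoint, prefixes, and example chunk here are placeholders.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Illustrative only: approximates the multi-task chunk-knowledge generation with two
# task-prefixed generate() calls on an off-the-shelf T5 model.
MODEL_NAME = "t5-small"  # placeholder; not the paper's fine-tuned checkpoint
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def generate_chunk_knowledge(chunk: str) -> dict:
    outputs = {}
    for task, prefix in [("title", "generate title: "), ("question", "generate question: ")]:
        ids = tokenizer(prefix + chunk, return_tensors="pt", truncation=True).input_ids
        gen = model.generate(ids, max_new_tokens=32, num_beams=4)
        outputs[task] = tokenizer.decode(gen[0], skip_special_tokens=True)
    return outputs

chunk = "Transformers use self-attention to weigh the relevance of each token to every other token."
print(generate_chunk_knowledge(chunk))  # title and candidate question indexed alongside the chunk
```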
In the age of information overload, content management for online news articles relies on efficient summarization to enhance accessibility and user engagement. This article addresses the challenge of extractive text summarization by employing advanced machine learning techniques to generate concise and coherent summaries while preserving the original meaning. Using the Cornell Newsroom dataset, comprising 1.3 million article-summary pairs, we developed a pipeline leveraging BERT embeddings to transform textual data into numerical representations. By framing the task as a binary classification problem, we explored various models, including logistic regression, feed-forward neural networks, and long short-term memory (LSTM) networks. Our findings demonstrate that LSTM networks, with their ability to capture sequential dependencies, outperform baseline methods like Lede-3 and simpler models in F1 score and ROUGE-1 metrics. This study underscores the potential of automated summarization in improving content management systems for online news platforms, enabling more efficient content organization and enhanced user experiences.
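A minimal sketch of the described pipeline, assuming BERT [CLS] embeddings per sentence and a logistic-regression baseline for the binary keep-in-summary decision (the paper also evaluates feed-forward and LSTM classifiers); the toy sentences and labels below are placeholders rather than Cornell Newsroom data.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Embed each sentence with BERT, then train a binary classifier that predicts whether
# the sentence should be selected for the extractive summary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(sentences):
    with torch.no_grad():
        enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        out = bert(**enc).last_hidden_state[:, 0]  # [CLS] embedding per sentence
    return out.numpy()

sentences = ["The mayor announced a new transit plan on Monday.",
             "Reporters gathered outside city hall.",
             "The plan allocates $2 billion to new bus lines.",
             "It was raining that afternoon."]
labels = [1, 0, 1, 0]  # 1 = sentence belongs in the extractive summary (toy labels)

clf = LogisticRegression(max_iter=1000).fit(embed(sentences), labels)
print(clf.predict(embed(["The mayor unveiled a $2 billion transit proposal."])))
```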




Speculative decoding is a widely adopted technique for accelerating inference in large language models (LLMs), yet its application to vision-language models (VLMs) remains underexplored, with existing methods achieving only modest speedups (<1.5x). This gap is increasingly significant as multimodal capabilities become central to large-scale models. We hypothesize that large VLMs can effectively filter redundant image information layer by layer without compromising textual comprehension, whereas smaller draft models struggle to do so. To address this, we introduce Vision-Aware Speculative Decoding (ViSpec), a novel framework tailored for VLMs. ViSpec employs a lightweight vision adaptor module to compress image tokens into a compact representation, which is seamlessly integrated into the draft model's attention mechanism while preserving original image positional information. Additionally, we extract a global feature vector for each input image and augment all subsequent text tokens with this feature to enhance multimodal coherence. To overcome the scarcity of multimodal datasets with long assistant responses, we curate a specialized training dataset by repurposing existing datasets and generating extended outputs using the target VLM with modified prompts. Our training strategy mitigates the risk of the draft model exploiting direct access to the target model's hidden states, which could otherwise lead to shortcut learning when training solely on target model outputs. Extensive experiments validate ViSpec, achieving, to our knowledge, the first substantial speedup in VLM speculative decoding.
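The two ingredients named above, compressing image tokens into a compact representation and broadcasting a global image feature onto text tokens, could be sketched roughly as follows in PyTorch. Shapes, the number of compressed tokens, and the additive broadcast are illustrative assumptions; the positional-information handling and draft-model training described in the abstract are not reproduced here.

```python
import torch
import torch.nn as nn

class VisionAdaptor(nn.Module):
    """Illustrative adaptor: compresses many image tokens into a few learned tokens via cross-attention."""
    def __init__(self, dim: int, num_compressed: int = 8, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_compressed, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, image_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (batch, num_image_tokens, dim) from the vision encoder
        q = self.queries.unsqueeze(0).expand(image_tokens.size(0), -1, -1)
        compressed, _ = self.attn(q, image_tokens, image_tokens)
        return compressed  # (batch, num_compressed, dim) fed to the draft model

def augment_text_with_global_feature(text_emb, image_tokens):
    # Broadcast a pooled global image feature onto every text-token embedding (additive, illustrative).
    global_feat = image_tokens.mean(dim=1, keepdim=True)  # (batch, 1, dim)
    return text_emb + global_feat

# Toy usage with made-up sizes
adaptor = VisionAdaptor(dim=1024)
img = torch.randn(2, 576, 1024)  # e.g., 24x24 patch tokens
txt = torch.randn(2, 50, 1024)
draft_visual_input = adaptor(img)
draft_text_input = augment_text_with_global_feature(txt, img)
```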
Integrated sensing and communication (ISAC) is a promising technique for expanding the functionalities of wireless networks with enhanced spectral efficiency. The 3rd Generation Partnership Project (3GPP) has defined six basic sensing operation modes in wireless networks. To further enhance the sensing capability of wireless networks, this paper proposes a new sensing operation mode, i.e., base station (BS) and user equipment (UE) cooperative sensing. Specifically, after decoding the communication data, the UE further processes the received signal to extract the target sensing information. We propose an efficient algorithm for fusing the sensing results obtained by the BS and UE, exploiting the geometric relationship among the BS, UE, and targets as well as the expected sensing quality of the BS monostatic and BS-UE bistatic sensing. The results show that the proposed data fusion method for cooperative sensing effectively improves the position and velocity estimation accuracy of multiple targets and offers a new way to expand the sensing operation modes of wireless networks.
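The fusion rule itself is not detailed in the abstract; a generic way to weight two position estimates by their expected sensing quality is inverse-covariance (weighted least-squares) fusion, sketched below with made-up numbers. This illustrates the weighting idea only and is not the paper's specific algorithm.

```python
import numpy as np

def fuse_estimates(p_bs, cov_bs, p_ue, cov_ue):
    """Inverse-covariance fusion of a BS monostatic estimate and a BS-UE bistatic estimate.

    Generic weighted-least-squares fusion: each estimate is weighted by the inverse of its
    error covariance, so the more reliable measurement dominates the fused result.
    """
    w_bs, w_ue = np.linalg.inv(cov_bs), np.linalg.inv(cov_ue)
    cov_fused = np.linalg.inv(w_bs + w_ue)
    p_fused = cov_fused @ (w_bs @ p_bs + w_ue @ p_ue)
    return p_fused, cov_fused

# Toy example: 2D target position estimates (meters) with illustrative error covariances
p_bs = np.array([102.0, 48.5]);  cov_bs = np.diag([4.0, 4.0])   # monostatic: coarser here
p_ue = np.array([100.5, 50.2]);  cov_ue = np.diag([1.0, 1.0])   # bistatic: finer in this toy case
p, cov = fuse_estimates(p_bs, cov_bs, p_ue, cov_ue)
print(p)  # fused estimate lies closer to the lower-variance measurement
```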




Natural language processing (NLP) is a key technology for extracting important patient information from clinical narratives to support healthcare applications. The rapid development of large language models (LLMs) has revolutionized many NLP tasks in the clinical domain, yet their optimal use in patient information extraction requires further exploration. This study examines the effectiveness of LLMs for patient information extraction, focusing on LLM architectures, fine-tuning strategies, and multi-task instruction tuning techniques for developing robust and generalizable extraction systems. It explores key design choices in using LLMs for clinical concept and relation extraction, including: (1) encoder-only versus decoder-only LLMs, (2) prompt-based parameter-efficient fine-tuning (PEFT) algorithms, and (3) the effect of multi-task instruction tuning on few-shot learning performance. We benchmarked a suite of LLMs, including encoder-based LLMs (BERT, GatorTron) and decoder-based LLMs (GatorTronGPT, Llama 3.1, GatorTronLlama), across five datasets. We compared traditional full-parameter fine-tuning with prompt-based PEFT. We further explored a multi-task instruction tuning framework that combines both tasks across four datasets and evaluated zero-shot and few-shot learning performance using a leave-one-dataset-out strategy.
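For concreteness, a minimal sketch of prompt-based PEFT (soft prompt tuning via the Hugging Face peft library) wrapped around a decoder-only LLM is shown below; the base checkpoint, initialization text, and number of virtual tokens are illustrative choices, not the study's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# Prompt-based PEFT sketch: only the virtual prompt embeddings are trainable, the base LLM
# stays frozen. Checkpoint name and prompt text are placeholders.
BASE = "meta-llama/Llama-3.1-8B"  # placeholder; any causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Extract clinical concepts and their relations from the note:",
    tokenizer_name_or_path=BASE,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the soft-prompt parameters are trainable
# ...train with a standard causal-LM loss over instruction-formatted extraction examples...
```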
Traditional single-input single-output (SISO) systems face fundamental limitations in achieving accurate three-dimensional (3D) localization due to limited spatial degrees of freedom (DoF) and the adverse impact of multipath propagation. This paper proposes a novel fluid antenna system (FAS)-active reconfigurable intelligent surface (ARIS) framework that transforms multipath effects from a hindrance into a resource for enhanced localization. By synergistically combining the signal amplification capabilities of ARIS with the spatial diversity enabled by FAS, the proposed system achieves robust 3D user equipment (UE) positioning without relying on auxiliary information such as time-of-arrival (ToA) or frequency diversity. The system exploits both line-of-sight (LoS) and non-line-of-sight (NLoS) components through a tailored signal decoupling strategy. We design novel UE pilot sequences and ARIS phase configurations to effectively separate LoS and NLoS channels, enabling independent parameter estimation. A multi-stage estimation algorithm is then applied: the multiple signal classification (MUSIC) algorithm estimates angle-of-arrival (AoA) from the direct path, while maximum likelihood estimation with interior-point refinement recovers cascaded channel parameters from the reflected path. Finally, geometric triangulation using least-squares estimation determines the UE's 3D position based on the extracted AoA information. Comprehensive performance analysis, including the derivation of Cramér-Rao bounds for both channel and position estimation, establishes theoretical benchmarks. Simulation results confirm that the proposed FAS-ARIS framework achieves near-optimal localization accuracy while maintaining robustness in rich multipath environments, effectively turning conventional localization challenges into advantages.
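As a small illustration of the direct-path step, a textbook MUSIC angle-of-arrival estimator for a uniform linear array can be written in a few lines of NumPy; the array size, snapshot count, and noise level below are made up, and the cascaded-channel and triangulation stages are not shown.

```python
import numpy as np

def music_aoa(snapshots, num_sources, antennas, spacing=0.5, grid=np.linspace(-90, 90, 721)):
    """MUSIC pseudo-spectrum over candidate angles (degrees) for a uniform linear array."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = eigvecs[:, :antennas - num_sources]                 # noise subspace
    spectrum = []
    for theta in grid:
        a = np.exp(-2j * np.pi * spacing * np.arange(antennas) * np.sin(np.deg2rad(theta)))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(spectrum)

# Toy scenario: 8-element ULA, one direct-path source at 25 degrees, 200 snapshots
rng = np.random.default_rng(0)
M, theta_true, snaps = 8, 25.0, 200
a_true = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta_true)))
s = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
X = np.outer(a_true, s) + noise
angles, P = music_aoa(X, num_sources=1, antennas=M)
print(angles[np.argmax(P)])  # spectrum peak should land near 25 degrees
```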
A wireless channel foundation model (WCFM) is a task-agnostic AI model pretrained on large-scale wireless channel datasets to learn a universal channel feature representation that can be used for a wide range of downstream communication and sensing tasks. While existing works on WCFMs have demonstrated their great potential in tasks including beam prediction, channel prediction, and localization, the models are all trained using perfect (i.e., error-free and complete) channel state information (CSI) generated with simulation tools. However, in practical systems where the WCFM is deployed, perfect CSI is not available. Instead, channel estimation must first be performed based on pilot signals over a subset of the resource elements (REs) to acquire a noisy version of the CSI (termed degraded CSI), which differs significantly from the perfect CSI in real-world environments with severe noise and interference. As a result, the feature representation generated by the WCFM fails to reflect the characteristics of the true channel, degrading performance in downstream tasks. To address this issue, this paper proposes an enhanced wireless channel foundation model architecture with noise-plus-interference (NPI) suppression capability. In our approach, coarse CSI estimates are first obtained. With this information, two projection matrices are computed to extract the NPI terms in the received signals, which are further processed by an NPI estimation and subtraction module. Finally, the resultant signal is passed through a CSI completion network to obtain a clean version of the CSI, which is used for feature extraction. Simulation results demonstrate that, compared to state-of-the-art solutions, the WCFM with the NPI suppression structure achieves improved performance on the channel prediction task.
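The abstract only names projection-based NPI extraction; as a generic illustration (not the paper's specific module), one can project the received signal onto the orthogonal complement of the coarsely estimated signal subspace to isolate the noise-plus-interference component and subtract it, as in the NumPy sketch below with illustrative dimensions.

```python
import numpy as np

def npi_projection(H_coarse):
    """Orthogonal projector onto the complement of the coarse signal subspace span(H_coarse)."""
    # P = I - H (H^H H)^{-1} H^H ; applying P to a received vector keeps only the
    # noise-plus-interference component lying outside the estimated signal subspace.
    H = H_coarse
    return np.eye(H.shape[0]) - H @ np.linalg.inv(H.conj().T @ H) @ H.conj().T

# Toy setup: 16 receive dimensions, a 2-column coarse signal subspace (illustrative sizes)
rng = np.random.default_rng(1)
H_coarse = rng.standard_normal((16, 2)) + 1j * rng.standard_normal((16, 2))
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)    # transmitted symbols
npi = 0.3 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
y = H_coarse @ x + npi                                       # received signal

P = npi_projection(H_coarse)
npi_est = P @ y       # NPI component outside the estimated signal subspace
y_clean = y - npi_est # partially NPI-suppressed signal passed on to CSI completion / feature extraction
```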
While graph neural networks (GNNs) have achieved great success in learning from graph-structured data, their reliance on local, pairwise message passing restricts their ability to capture complex, high-order subgraph patterns, leading to insufficient structural expressiveness. Recent efforts have attempted to enhance structural expressiveness by integrating random walk kernels into GNNs. However, these methods are inherently designed for graph-level tasks, which limits their applicability to other downstream tasks such as node classification. Moreover, their fixed kernel configurations hinder the model's flexibility in capturing diverse subgraph structures. To address these limitations, this paper proposes a novel Mixture of Subgraph Experts (MoSE) framework for flexible and expressive subgraph-based representation learning across diverse graph tasks. Specifically, MoSE extracts informative subgraphs via anonymous walks and dynamically routes them to specialized experts based on structural semantics, enabling the model to capture diverse subgraph patterns with improved flexibility and interpretability. We further provide a theoretical analysis of MoSE's expressivity within the Subgraph Weisfeiler-Lehman (SWL) Test, proving that it is more powerful than SWL. Extensive experiments, together with visualizations of learned subgraph experts, demonstrate that MoSE not only outperforms competitive baselines but also provides interpretable insights into structural patterns learned by the model.
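For intuition, an anonymous walk relabels a random walk by the order in which nodes are first visited, so structurally identical walks collapse to the same pattern. The sketch below extracts such patterns from a toy adjacency list; the routing of patterns to specialized experts is left abstract and the graph is hypothetical.

```python
import random
from collections import defaultdict

def anonymous_walk(walk):
    """Relabel nodes by first-occurrence order, e.g. [7, 3, 7, 9] -> (0, 1, 0, 2)."""
    first_seen, out = {}, []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        out.append(first_seen[node])
    return tuple(out)

def sample_anonymous_walks(adj, start, length, num_walks, seed=0):
    rng = random.Random(seed)
    patterns = defaultdict(int)
    for _ in range(num_walks):
        walk, node = [start], start
        for _ in range(length - 1):
            node = rng.choice(adj[node])
            walk.append(node)
        patterns[anonymous_walk(walk)] += 1
    return patterns  # each distinct pattern could be routed to a specialized expert

# Toy graph: a triangle attached to a pendant node
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(sample_anonymous_walks(adj, start=0, length=4, num_walks=100))
```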