Abstract: In environments where acoustic privacy or deliberate signal obfuscation is desired, the acoustic signature generated by essential operations must be masked. We consider the problem of masking the effect of an acoustic source in a target region where detection sensors may be located. Masking is achieved by placing interference signals near the acoustic source. We introduce a theoretical and computational framework for designing such interference signals so as to minimize the residual amplitude in the target region. For the three-dimensional (3D) forced wave equation with spherical symmetry, we derive analytical quasi-steady periodic solutions for several canonical cases. We examine the phenomenon of self-masking, in which an acoustic source with a certain spatial forcing profile masks itself from detection outside its forcing footprint. We then use superposition of spherically symmetric solutions to investigate masking in a given target region, and we analyze and optimize the masking performance of one or two point forces deployed near the acoustic source. For the general case where the spatial forcing profile of the acoustic source lacks spherical symmetry, we develop an efficient numerical method for solving the 3D wave equation. Potential applications of this work include secure undersea acoustic communication, stealth of undersea vehicles, and protection against acoustic surveillance.
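To make the superposition idea concrete, the following is a minimal sketch of how one might choose the complex amplitudes of one or two time-harmonic interference point sources to suppress the primary source's field at sampled points of a target region. The linear least-squares formulation, the free-space Helmholtz Green's function, and all geometry and frequency values are illustrative assumptions, not the paper's actual derivation or optimizer.

```python
# Sketch: optimize interference-source amplitudes to minimize residual
# amplitude in a sampled target region (assumed least-squares formulation).
import numpy as np

c = 1500.0                      # sound speed (m/s), e.g., seawater (assumed)
omega = 2 * np.pi * 100.0       # source angular frequency (rad/s, assumed)
k = omega / c                   # wavenumber

def green(x, src):
    """Free-space Green's function of the Helmholtz equation: e^{ikr}/(4*pi*r)."""
    r = np.linalg.norm(x - src, axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Primary source at the origin; two interference sources nearby (positions assumed).
x_primary = np.zeros(3)
x_interf = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])

# Sample the target region with a cloud of points (here a patch around ~100 m away).
rng = np.random.default_rng(0)
targets = np.array([100.0, 0.0, 0.0]) + 5.0 * rng.standard_normal((200, 3))

# Field of the unit-amplitude primary source at the target points.
p0 = green(targets, x_primary)
# Columns of G: field of each candidate interference source at the targets.
G = np.stack([green(targets, s) for s in x_interf], axis=1)

# Linear least squares over complex amplitudes a: minimize ||p0 + G a||^2.
a, *_ = np.linalg.lstsq(G, -p0, rcond=None)

residual = np.linalg.norm(p0 + G @ a) / np.linalg.norm(p0)
print("interference amplitudes:", a)
print("relative residual amplitude in target region:", residual)
```

With two interference sources the least-squares problem has two complex unknowns; the relative residual printed at the end is one possible stand-in for the masking performance the abstract refers to.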
Abstract: Zero-shot referring image segmentation aims to locate and segment the target region described by a referring expression; its primary challenge is aligning and matching semantics across the visual and textual modalities without training. Previous works address this challenge by utilizing Vision-Language Models and mask proposal networks for region-text matching. However, this paradigm may lead to incorrect target localization due to the inherent ambiguity and diversity of free-form referring expressions. To alleviate this issue, we present LGD (Leveraging Generative Descriptions), a framework that utilizes the advanced language generation capabilities of Multi-Modal Large Language Models to enhance region-text matching in Vision-Language Models. Specifically, we first design two kinds of prompts, an attribute prompt and a surrounding prompt, to guide the Multi-Modal Large Language Model in generating descriptions of the crucial attributes of the referent object and of the surrounding objects, referred to as the attribute description and the surrounding description, respectively. Second, we introduce three visual-text matching scores that evaluate the similarity between instance-level visual features and textual features and together determine the mask most associated with the referring expression. The proposed method achieves new state-of-the-art performance on three public datasets, RefCOCO, RefCOCO+, and RefCOCOg, with maximum improvements of 9.97% in oIoU and 11.29% in mIoU over previous methods.
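The score-combination step can be illustrated with a short sketch: rank candidate mask proposals by CLIP similarity to the referring expression and to the two generated descriptions, then select the top-scoring mask. The masked-crop encoding, the fixed score weights, and the helper names below are assumptions for illustration, not LGD's actual implementation.

```python
# Sketch: combine three region-text matching scores (referring expression,
# attribute description, surrounding description) to pick the best mask.
import numpy as np
import torch
import clip  # OpenAI CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def masked_crop(image, mask):
    """Crop the image to the mask's bounding box (one simple proposal encoding)."""
    ys, xs = np.nonzero(np.array(mask))
    return image.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))

@torch.no_grad()
def select_mask(image, masks, referring_expr, attr_desc, surround_desc,
                weights=(1.0, 0.5, 0.5)):  # weights are assumed, not tuned
    # Encode the three text signals once.
    texts = clip.tokenize([referring_expr, attr_desc, surround_desc]).to(device)
    t = model.encode_text(texts)
    t = t / t.norm(dim=-1, keepdim=True)

    # Encode every mask proposal as a masked crop.
    crops = torch.stack([preprocess(masked_crop(image, m)) for m in masks]).to(device)
    v = model.encode_image(crops)
    v = v / v.norm(dim=-1, keepdim=True)

    # Three cosine-similarity scores per proposal, combined linearly.
    sims = v @ t.T                                   # shape: (num_masks, 3)
    score = sims @ torch.tensor(weights, device=device, dtype=sims.dtype)
    return masks[score.argmax().item()]
```

Here the referring expression carries the largest weight, with the two generated descriptions acting as auxiliary evidence; in practice the weighting and the way proposals are encoded would follow the paper's design rather than this simplified crop-and-score scheme.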