Abstract: Surgical video segmentation is fundamental to computer-assisted surgery. In practice, surgeons need to dynamically specify targets throughout extended procedures, using heterogeneous cues such as visual selections, textual expressions, or audio instructions. However, existing Promptable Video Object Segmentation (PVOS) methods are typically restricted to a single prompt modality and rely on coupled frameworks that cause optimization interference between target initialization and tracking. Moreover, these methods produce hallucinated predictions when the target is absent and suffer from accumulated mask drift without failure recovery. To address these challenges, we present UniSurgSAM, a unified PVOS model enabling reliable surgical video segmentation through visual, textual, or audio prompts. Specifically, UniSurgSAM employs a decoupled two-stage framework that independently optimizes initialization and tracking to resolve the optimization interference. Within this framework, we introduce three key designs for reliability: presence-aware decoding that models target absence to suppress hallucinations; boundary-aware long-term tracking that prevents mask drift over extended sequences; and adaptive state transition that closes the loop between stages for failure recovery. Furthermore, we establish a multi-modal and multi-granular benchmark from four public surgical datasets with precise instance-level masklets. Extensive experiments demonstrate that UniSurgSAM achieves state-of-the-art performance in real time across all prompt modalities and granularities, providing a practical foundation for computer-assisted surgery. Code and datasets will be available at https://jinlab-imvr.github.io/UniSurgSAM.
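
To make the decoupled two-stage design concrete, the following minimal sketch illustrates how presence-aware decoding and adaptive state transition could interact in an inference loop. It is an illustration under our own assumptions, not the authors' implementation: the placeholder initialize/track functions, the presence head, and the thresholds PRESENCE_THRESH and RECOVERY_THRESH are all hypothetical.

# Illustrative sketch only: a decoupled PVOS loop with a presence score that
# gates mask output (suppressing hallucinations when the target is absent)
# and a fallback to re-initialization when tracking confidence collapses.
import numpy as np

PRESENCE_THRESH = 0.5   # assumed: below this, the target is treated as absent
RECOVERY_THRESH = 0.2   # assumed: below this, tracking is considered failed

def initialize(frame, prompt):
    """Stage 1 (assumed interface): turn a visual/text/audio prompt into an initial mask and memory."""
    mask = np.zeros(frame.shape[:2], dtype=bool)   # placeholder segmentation
    return {"mask": mask}

def track(frame, memory):
    """Stage 2 (assumed interface): propagate the target and estimate a presence score."""
    mask = memory["mask"]                          # placeholder propagation
    presence = 1.0 if mask.any() else 0.0          # placeholder presence head
    return mask, presence, memory

def segment_video(frames, prompt):
    memory = initialize(frames[0], prompt)
    outputs = []
    for frame in frames[1:]:
        mask, presence, memory = track(frame, memory)
        if presence < RECOVERY_THRESH:
            # adaptive state transition: hand control back to initialization
            memory = initialize(frame, prompt)
            mask, presence, memory = track(frame, memory)
        # presence-aware decoding: emit an empty mask instead of hallucinating
        outputs.append(mask if presence >= PRESENCE_THRESH else np.zeros_like(mask))
    return outputs

The point of the sketch is the control flow: initialization and tracking are optimized and invoked as separate stages, and the presence estimate both gates the output and triggers recovery.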




Abstract: Accurate lesion segmentation in whole-body PET/CT scans is crucial for cancer diagnosis and treatment planning, but limited datasets often hinder the performance of automated segmentation models. In this paper, we explore the potential of leveraging the deep prior from a generative model to serve as a data augmenter for automated lesion segmentation in PET/CT scans. We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET/CT images with lesions. Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data. We then compare the performance of segmentation models trained on the original and augmented datasets. Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach. In summary, this work presents a promising direction for improving lesion segmentation in whole-body PET/CT scans with limited datasets, potentially enhancing the accuracy and reliability of cancer diagnostics.
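
A minimal sketch of the augmentation setup described above, under our own assumptions: real AutoPET cases are pooled with synthetic lesion-bearing volumes before segmentation training, and the resulting models are compared by Dice score. The directory paths, file naming, and the mixing ratio are hypothetical; the training step itself is omitted.

# Illustrative sketch only: mixing original and generated PET/CT cases,
# plus the Dice metric used to compare segmentation models.
import numpy as np
from pathlib import Path

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Dice overlap between binary lesion masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def build_training_list(real_dir: Path, synthetic_dir: Path, synth_ratio: float = 0.5):
    """Pool real cases with an assumed fraction of synthetic cases."""
    real_cases = sorted(real_dir.glob("*.nii.gz"))
    synth_cases = sorted(synthetic_dir.glob("*.nii.gz"))
    n_synth = int(len(real_cases) * synth_ratio)
    return real_cases + synth_cases[:n_synth]

# Hypothetical usage: build the augmented case list, train a segmentation model
# on it, and evaluate against the baseline with dice_score(pred_mask, gt_mask).
# cases = build_training_list(Path("autopet/real"), Path("autopet/synthetic"))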