Abstract: Recent advances in 3D reconstruction techniques and vision-language models have fueled significant progress in 3D semantic understanding, a capability critical to robotics, autonomous driving, and virtual/augmented reality. However, methods that rely on 2D priors face a critical challenge: cross-view semantic inconsistencies induced by occlusion, image blur, and view-dependent variations. These inconsistencies, when propagated via projection supervision, degrade the quality of 3D Gaussian semantic fields and introduce artifacts into the rendered outputs. To mitigate this limitation, we propose CCL-LGS, a novel framework that enforces view-consistent semantic supervision by integrating multi-view semantic cues. Specifically, our approach first employs a zero-shot tracker to align a set of SAM-generated 2D masks and reliably identify their corresponding categories. Next, we utilize CLIP to extract robust semantic encodings across views. Finally, our Contrastive Codebook Learning (CCL) module distills discriminative semantic features by enforcing intra-class compactness and inter-class distinctiveness. In contrast to previous methods that directly apply CLIP to imperfect masks, our framework explicitly resolves semantic conflicts while preserving category discriminability. Extensive experiments demonstrate that CCL-LGS outperforms previous state-of-the-art methods. Our project page is available at https://epsilontl.github.io/CCL-LGS/.
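The abstract does not specify the exact form of the CCL objective, but a loss enforcing intra-class compactness and inter-class distinctiveness over a learnable codebook can be sketched as a prototype-contrastive cross-entropy. The sketch below is a minimal, hedged illustration, not the paper's implementation; the function name, the temperature value, and the assumption that the codebook holds one prototype per tracked category are all hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_codebook_loss(features: torch.Tensor,
                              labels: torch.Tensor,
                              codebook: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Illustrative prototype-contrastive loss (not the paper's exact CCL).

    features: (N, D) per-mask CLIP embeddings gathered across views
    labels:   (N,)   category indices assigned by the zero-shot tracker
    codebook: (K, D) learnable class prototypes (one per category, assumed)
    """
    # Cosine similarity between each feature and every prototype.
    feats = F.normalize(features, dim=-1)
    protos = F.normalize(codebook, dim=-1)
    logits = feats @ protos.t() / temperature  # (N, K)
    # Cross-entropy pulls each feature toward its own prototype
    # (intra-class compactness) and pushes it away from the others
    # (inter-class distinctiveness).
    return F.cross_entropy(logits, labels)
```

Under this reading, multi-view CLIP features for the same tracked object share a label, so conflicting per-view encodings are averaged out into a single discriminative prototype rather than supervising the 3D Gaussians directly.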
Abstract: The shutter strategy applied during photo shooting has a significant influence on the quality of the captured photograph. An improper shutter may lead to a blurry image, video discontinuity, or rolling shutter artifacts. Existing works address each of these issues with an independent solution. In this work, we aim to re-expose the captured photo in post-processing, providing a more flexible way of addressing these issues within a unified framework. Specifically, we propose a neural network-based image re-exposure framework. It consists of an encoder that constructs a visual latent space, a re-exposure module that aggregates information onto a neural film under a desired shutter strategy, and a decoder that 'develops' the neural film into the desired image. To compensate for information confusion and missing frames, event streams, which capture nearly continuous brightness changes, are leveraged in computing the visual latent content. The re-exposure module employs both self-attention and cross-attention layers to promote interaction between the neural film and the visual latent content and to aggregate information onto the film. The proposed unified image re-exposure framework is evaluated on several shutter-related image recovery tasks and performs favorably against independent state-of-the-art methods.
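The abstract leaves the internals of the re-exposure module unspecified; one plausible realization of "self-attention plus cross-attention onto a neural film" is a transformer block in which learnable film tokens query the visual latent content. The following is a minimal sketch under that assumption; the class name, token shapes, and pre-norm residual layout are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn

class ReExposureBlock(nn.Module):
    """Illustrative aggregation step (an assumed design, not the paper's):
    self-attention over the neural-film tokens, then cross-attention that
    lets the film query visual latent content built from frames + events."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, film: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # film:   (B, M, D) neural-film tokens, conditioned on the target shutter
        # latent: (B, N, D) visual latent content from images and event streams
        x = self.norm1(film)
        film = film + self.self_attn(x, x, x)[0]       # film tokens interact
        x = self.norm2(film)
        film = film + self.cross_attn(x, latent, latent)[0]  # aggregate onto film
        return film
```

Stacking several such blocks and decoding the resulting film tokens would match the encoder / re-exposure / decoder pipeline the abstract describes, with the shutter strategy entering through how the film tokens are initialized.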