Abstract:RGB-D scene parsing methods effectively capture both semantic and geometric features of the environment, demonstrating great potential under challenging conditions such as extreme weather and low lighting. However, existing RGB-D scene parsing methods predominantly rely on supervised training strategies, which require large quantities of manually annotated pixel-level labels that are time-consuming and costly to obtain. To overcome these limitations, we introduce DepthMatch, a semi-supervised learning framework designed specifically for RGB-D scene parsing. To make full use of unlabeled data, we propose a complementary patch mix-up augmentation that exploits the latent relationships between texture and spatial features in RGB-D image pairs. We also design a lightweight spatial prior injector to replace traditional complex fusion modules, improving the efficiency of heterogeneous feature fusion. Furthermore, we introduce a depth-guided boundary loss to enhance the model's boundary prediction capability. Experimental results demonstrate that DepthMatch is highly applicable in both indoor and outdoor scenes, achieving state-of-the-art results on the NYUv2 dataset and ranking first on the KITTI Semantics benchmark.
Abstract:Dynamic adaptive streaming over HTTP underpins most modern multimedia services; however, the nature of this technology complicates the assessment of Quality of Experience (QoE). In this paper, we study the influence of various objective factors on the subjective estimation of streaming video QoE. The paper presents standard and handcrafted features and reports their correlations and p-values of significance. We propose Video Quality Assessment (VQA) models based on regression and gradient boosting, with SRCC reaching up to 0.9647 on the validation subsample. The proposed regression models are adapted for practical applications (both with and without a reference video); the Gradient Boosting Regressor model is promising for further improvement of the quality estimation model. We use the SQoE-III database, so far the largest and most realistic of its kind. The VQA models are available at https://github.com/AleksandrIvchenko/QoE-assesment