Abstract: Existing Vision-Language Models often struggle with complex, multi-question reasoning tasks in which partial correctness is crucial for effective learning. Traditional reward mechanisms, which provide a single binary score for an entire response, are too coarse to guide models through intricate problems with multiple sub-parts. To address this, we introduce StructVRM, a method that aligns multimodal reasoning with Structured and Verifiable Reward Models. At its core is a model-based verifier trained to provide fine-grained, sub-question-level feedback, assessing semantic and mathematical equivalence rather than relying on rigid string matching. This enables nuanced, partial-credit scoring in previously intractable problem formats. Extensive experiments demonstrate the effectiveness of StructVRM: our trained model, Seed-StructVRM, achieves state-of-the-art performance on six of twelve public multimodal benchmarks and on our newly curated, high-difficulty STEM-Bench. These results validate that training with structured, verifiable rewards is a highly effective approach for advancing the capabilities of multimodal models in complex, real-world reasoning domains.
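To make the reward structure concrete, here is a minimal Python sketch of sub-question-level partial credit. The `SubQuestion` container, the `verifier` callable, and the simple averaging rule are illustrative assumptions standing in for the trained, model-based verifier described above; they are not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SubQuestion:
    """One sub-part of a multi-part problem (hypothetical container)."""
    prediction: str  # the model's answer to this sub-question
    reference: str   # the ground-truth answer

def structured_reward(parts: List[SubQuestion],
                      verifier: Callable[[str, str], bool]) -> float:
    """Return the fraction of sub-questions the verifier judges correct.

    `verifier` stands in for the paper's model-based equivalence checker
    (semantic/mathematical equivalence, not string matching); its exact
    interface here is an assumption. The key point is that the reward is
    a fractional, per-sub-question score rather than a single binary one.
    """
    if not parts:
        return 0.0
    correct = sum(verifier(p.prediction, p.reference) for p in parts)
    return correct / len(parts)

# Example with a trivial stand-in verifier; the real verifier is a trained model.
if __name__ == "__main__":
    parts = [SubQuestion("x = 4", "4"), SubQuestion("12 m/s", "12 m/s")]
    print(structured_reward(parts, lambda pred, ref: ref in pred))  # 1.0
```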
Abstract: Accurately locating the start and end times of actions in untrimmed videos is a challenging task, in large part because action boundaries are not highly distinguishable and the features around them are difficult to discriminate. To address this problem, we propose a novel framework for temporal action proposal generation, the Continuous Multi-stage Network (CMSN), which divides a video containing a complete action instance into six stages: Background, Ready, Start, Confirm, End, and Follow. To distinguish Ready from Start and End from Follow more accurately, we propose a novel loss function, the Variable Margin Cosine Loss (VMCL), which allows different margins between different categories. Experiments on THUMOS14 show that, with the same network architecture and training dataset, the proposed method outperforms state-of-the-art methods for temporal proposal generation.
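As a rough illustration of what a variable-margin cosine loss can look like, the sketch below assumes a CosFace-style large-margin cosine loss in which the margin subtracted from the target logit varies per class, so that hard-to-separate stages (Ready vs. Start, End vs. Follow) can receive larger margins. The per-class margin layout, the scale factor, and all concrete values are assumptions for illustration; the paper's VMCL may parameterize margins differently (e.g., per category pair).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariableMarginCosineLoss(nn.Module):
    """Sketch of a large-margin cosine loss with per-class margins.

    Assumes a CosFace-style formulation: features and class prototypes
    are L2-normalized, and a class-dependent margin is subtracted from
    the target logit before softmax cross-entropy. All hyperparameters
    here are illustrative, not taken from the paper.
    """
    def __init__(self, feat_dim: int, num_classes: int, margins, scale: float = 30.0):
        super().__init__()
        # One learnable prototype (weight vector) per stage class.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Per-class margin, e.g. larger for Ready/Start and End/Follow.
        self.register_buffer("margins", torch.as_tensor(margins, dtype=torch.float32))
        self.scale = scale

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between normalized features and prototypes: (B, C).
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the class-specific margin from the target logit only.
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.scale * (cosine - one_hot * self.margins[labels].unsqueeze(1))
        return F.cross_entropy(logits, labels)

# Example: six stages, with larger (hypothetical) margins on the
# ambiguous Ready/Start and End/Follow boundaries.
loss_fn = VariableMarginCosineLoss(
    feat_dim=256, num_classes=6,
    margins=[0.10, 0.35, 0.35, 0.10, 0.35, 0.35],
)
```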