Abstract: Text rendering has recently emerged as one of the most challenging frontiers in visual generation, drawing significant attention from the large-scale diffusion and multimodal modeling communities. However, text editing within images remains largely unexplored, as it requires generating legible characters while preserving semantic, geometric, and contextual coherence. To fill this gap, we introduce TextEditBench, a comprehensive evaluation benchmark that explicitly focuses on text-centric regions in images. Beyond basic pixel manipulations, our benchmark emphasizes reasoning-intensive editing scenarios that require models to understand physical plausibility, linguistic meaning, and cross-modal dependencies. We further propose a novel evaluation dimension, Semantic Expectation (SE), which measures a model's ability to maintain semantic consistency, contextual coherence, and cross-modal alignment during text editing. Extensive experiments on state-of-the-art editing systems reveal that while current models can follow simple textual instructions, they still struggle with context-dependent reasoning, physical consistency, and layout-aware integration. By focusing evaluation on this long-overlooked yet fundamental capability, TextEditBench establishes a new testing ground for advancing text-guided image editing and reasoning in multimodal generation.
Abstract: Mild traumatic brain injuries (mTBI) are a highly prevalent condition with heterogeneous outcomes between individuals. A key factor governing brain tissue deformation and the risk of mTBI is the rotational kinematics of the head. Instrumented mouthguards are a widely accepted method for measuring rotational head motions, owing to their robust sensor-skull coupling. However, wearing mouthguards is not feasible in all situations, especially for long-term data collection. Therefore, alternative wearable devices are needed. In this study, we present an improved design and data-processing scheme for an instrumented headband. Our instrumented headband utilizes an array of inertial measurement units (IMUs) and a new data-processing scheme based on continuous wavelet transforms to address sources of error in the IMU measurements. The headband performance was evaluated in the laboratory on an anthropomorphic test device, which was impacted with a soccer ball to replicate soccer heading. When comparing the measured peak rotational velocities (PRV) and peak rotational accelerations (PRA) between the reference sensors and the headband for impacts to the front of the head, the correlation coefficients (r) were 0.80 and 0.63, and the normalized root mean square error (NRMSE) values were 0.20 and 0.28, respectively. However, when considering all impact locations, r dropped to 0.42 and 0.34, and NRMSE increased to 0.50 and 0.41 for PRV and PRA, respectively. This new instrumented headband improves upon previous headband designs in reconstructing the rotational head kinematics resulting from frontal soccer ball impacts, providing a potential alternative to instrumented mouthguards.
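As a minimal sketch of how the agreement metrics reported above (Pearson's r and NRMSE between reference-sensor and headband peak kinematics) can be computed; note the normalization of the RMSE by the range of the reference values is an assumption, as the abstract does not state which normalization the authors used:

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series,
    # e.g. reference-sensor vs. headband peak rotational velocities.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def nrmse(reference, measured):
    # Root mean square error normalized by the range of the reference
    # values (this normalization choice is an assumption; other common
    # choices divide by the mean or the standard deviation).
    n = len(reference)
    rmse = math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / n)
    return rmse / (max(reference) - min(reference))
```

For example, a perfectly linear relationship yields r = 1.0 and, if the series are identical, an NRMSE of 0.0.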