Abstract: The disintegration of liquid sheets into ligaments and droplets involves highly transient, multi-scale dynamics that are difficult to quantify from high-speed shadowgraphy images. Identifying droplets, ligaments, and blobs formed during breakup, and tracking them across frames, is essential for spray analysis. However, conventional multi-object tracking frameworks impose strict one-to-one temporal associations and cannot represent one-to-many fragmentation events. In this study, we present a two-stage deep learning framework for object detection and temporal relationship modeling across frames. The framework captures ligament deformation, fragmentation, and parent-child lineage during liquid sheet disintegration. In the first stage, a Faster R-CNN with a ResNet-50 backbone and Feature Pyramid Network detects and classifies ligaments and droplets in high-speed shadowgraphy recordings of an impinging Carbopol gel jet. A morphology-preserving synthetic data generation strategy augments the training set without introducing physically implausible configurations, achieving a held-out F1 score of up to 0.872 across fourteen original-to-synthetic configurations. In the second stage, a Transformer-augmented multilayer perceptron classifies inter-frame associations into continuation, fragmentation (one-to-many), and non-association using physics-informed geometric features. Despite severe class imbalance, the model achieves 86.1% accuracy, 93.2% precision, and perfect recall (1.00) for fragmentation events. Together, these stages enable automated reconstruction of fragmentation trees, preservation of parent-child lineage, and extraction of breakup statistics such as fragment multiplicity and droplet size distributions. By explicitly identifying child droplets formed from ligament fragmentation, the framework provides automated analysis of the primary atomization mode.
Abstract: In this paper, we propose a multi-mutation optimization algorithm, Differential Evolution with Multi-Mutation Operator-Guided Communication (DE-MMOGC), designed to improve the performance and convergence behavior of standard differential evolution in uncertain environments. DE-MMOGC introduces a communication-guided scheme integrated with multiple mutation operators to encourage exploration and avoid premature convergence. In addition, it includes a dynamic operator selection mechanism that applies the best-performing operator over successive generations. To assimilate real-world uncertainties and missing observations into the predictive model, the proposed algorithm is combined with the Ensemble Kalman Filter. To evaluate the efficacy of DE-MMOGC in uncertain systems, the unified framework is applied to improve the predictive accuracy of crop simulation models. These simulation models are essential to precision agriculture, as they make it easier to estimate crop growth across a variety of unpredictable weather scenarios. However, precisely calibrating these models poses a challenge due to missing observations. Hence, the simplified WOFOST crop simulation model is used in this study for leaf area index (LAI)-based crop yield estimation. DE-MMOGC enhances WOFOST performance by optimizing crucial weather parameters (temperature and rainfall), since these parameters are highly uncertain across different crop varieties, such as wheat, rice, and cotton. The experimental study shows that DE-MMOGC outperforms traditional evolutionary optimizers and achieves better correlation with real LAI values. We find that DE-MMOGC is a resilient solution for crop monitoring.




Abstract: Body and face motion play an integral role in communication, conveying crucial information about the participants. Advances in generative modeling and multi-modal learning have enabled motion generation from signals such as speech, conversational context, and visual cues. However, generating expressive and coherent face and body dynamics remains challenging due to the complex interplay of verbal and non-verbal cues and individual personality traits. This survey reviews body and face motion generation, covering core concepts, representation techniques, generative approaches, datasets, and evaluation metrics. We highlight future directions to enhance the realism, coherence, and expressiveness of avatars in dyadic settings. To the best of our knowledge, this work is the first comprehensive review to cover both body and face motion. Detailed resources are listed on https://lownish23csz0010.github.io/mogen/.