Abstract: This paper presents a physics-informed framework that integrates graph convolutional networks (GCNs) with a long short-term memory (LSTM) architecture to forecast microstructure evolution over long time horizons in both 2D and 3D, with strong performance across varied metrics. The proposed framework is composition-aware, trained jointly on datasets with different compositions, and operates in a latent graph space, which enables the model to capture compositional and morphological dynamics while remaining computationally efficient. Compressing and encoding phase-field simulation data with convolutional autoencoders, and operating in the resulting latent graph space, facilitates efficient modeling of microstructural evolution across compositions, dimensions, and long time horizons. The framework captures the spatial and temporal patterns of evolving microstructures while enabling long-range forecasting at reduced computational cost after training.
Abstract: Inertial motion capture systems widely use low-cost IMUs to obtain the orientation of human body segments, but these sensors alone cannot estimate link positions. This research therefore uses a simultaneous localization and mapping (SLAM) method in conjunction with inertial data fusion to estimate link positions. SLAM tracks a target within a reconstructed map of the environment using a camera. This paper proposes quaternion-based extended Kalman filter (EKF) and square-root unscented Kalman filter (SRUKF) algorithms for pose estimation. The Kalman filters correct errors using measurements based on SLAM position data, multi-link biomechanical constraints, and vertical referencing. In addition to the sensor biases, the fusion algorithm estimates link geometries, allowing biomechanical constraints to be imposed without a priori knowledge of sensor positions. An optical tracking system is used as a ground-truth reference to experimentally evaluate the performance of the proposed algorithms in various scenarios of human arm movement. The proposed algorithms achieve accuracies of up to 5.87 cm in position and 1.1 deg in attitude estimation. Compared to the EKF, the SRUKF converges faster and more smoothly but is 2.4 times more computationally demanding. After convergence, the SRUKF is up to 17% less accurate than the EKF in position estimation and 36% more accurate in attitude estimation. Using an absolute position measurement method instead of SLAM reduced position and attitude estimation errors by 80% and 40%, respectively, for the EKF, and by 60% and 6% for the SRUKF.