Abstract: A chaotic system is a highly volatile system characterized by its sensitive dependence on initial conditions and outside factors. Chaotic systems are prevalent throughout the world today: in weather patterns, disease outbreaks, and even financial markets. Because chaotic systems appear in every field of science and the humanities, being able to predict them is greatly beneficial to society. In this study, we evaluate 10 different machine learning models and neural networks [1] based on Root Mean Squared Error (RMSE) and R^2 values for their ability to predict one such system, the multi-pendulum. We begin by generating synthetic data representing the angles of the pendulum over time using the fourth-order Runge-Kutta method for solving ordinary differential equations (ODE-RK4) [2]. At first, we used a single-step sliding-window approach, predicting the 50th step after training on steps 0-49, and so forth. However, to more accurately cover chaotic motion and behavior in these systems, we transitioned to a time-step-based approach. Here, we trained the model/network on many initial angles and tested it on a completely new set of initial angles, or on angles 'in between' those seen in training, to capture chaotic motion to its fullest extent. We also evaluated the stability of the system using Lyapunov exponents. We concluded that for the double pendulum, the best model was the Long Short-Term Memory network (LSTM) [3] for both the sliding-window and time-step approaches, in both the friction and frictionless scenarios. For the triple pendulum, the Vanilla Recurrent Neural Network (VRNN) [4] was the best for the sliding-window approach and the Gated Recurrent Unit (GRU) [5] was the best for the time-step approach, but with friction, the LSTM was the best.
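To make the data-generation and sliding-window setup concrete, the following is a minimal sketch of how such synthetic double-pendulum angle data could be produced with a classical RK4 integrator and framed as a single-step prediction task. It is not the authors' code: the pendulum parameters (unit masses and lengths, g = 9.81), the time step, the zero initial velocities, and the function names (derivs, rk4_step, simulate, make_sliding_windows) are all illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the authors' code): generate
# frictionless double-pendulum angle trajectories with RK4 and frame them
# as a 50-step sliding-window prediction task for a recurrent model.
import numpy as np

G, M1, M2, L1, L2 = 9.81, 1.0, 1.0, 1.0, 1.0  # assumed masses/lengths

def derivs(state):
    """Frictionless double-pendulum equations of motion.
    state = [theta1, theta2, omega1, omega2]."""
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 2*M1 + M2 - M2*np.cos(2*d)
    a1 = (-G*(2*M1 + M2)*np.sin(t1) - M2*G*np.sin(t1 - 2*t2)
          - 2*np.sin(d)*M2*(w2**2*L2 + w1**2*L1*np.cos(d))) / (L1*den)
    a2 = (2*np.sin(d)*(w1**2*L1*(M1 + M2) + G*(M1 + M2)*np.cos(t1)
          + w2**2*L2*M2*np.cos(d))) / (L2*den)
    return np.array([w1, w2, a1, a2])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta update."""
    k1 = derivs(state)
    k2 = derivs(state + 0.5*dt*k1)
    k3 = derivs(state + 0.5*dt*k2)
    k4 = derivs(state + dt*k3)
    return state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

def simulate(theta0, n_steps=2000, dt=0.01):
    """Integrate from initial angles theta0 (rad), zero initial velocity."""
    state = np.array([theta0[0], theta0[1], 0.0, 0.0])
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        traj[i] = state[:2]          # record the two angles
        state = rk4_step(state, dt)
    return traj

def make_sliding_windows(traj, window=50):
    """Frame the series so steps t..t+49 predict step t+50."""
    X = np.stack([traj[i:i+window] for i in range(len(traj) - window)])
    y = traj[window:]
    return X, y

angles = simulate(theta0=(np.pi/2, np.pi/4))
X, y = make_sliding_windows(angles)   # X: (N, 50, 2), y: (N, 2), e.g. for an LSTM
```

Under the time-step-based approach described above, one would instead call simulate for many different initial angles, train on whole trajectories, and hold out unseen (or in-between) initial angles for testing.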
Abstract: The growing digital landscape of fashion e-commerce calls for interactive and user-friendly interfaces for virtually trying on clothes. Traditional try-on methods grapple with challenges in adapting to diverse backgrounds, poses, and subjects. While newer methods, utilizing the recent advances of diffusion models, have achieved higher-quality image generation, the human-centered dimensions of mobile interface delivery and privacy concerns remain largely unexplored. We present Mobile Fitting Room, the first on-device diffusion-based virtual try-on system. To address multiple inter-related technical challenges such as high-quality garment placement and model compression for mobile devices, we present a novel technical pipeline and an interface design that enables privacy preservation and user customization. A usage scenario highlights how our tool can provide a seamless, interactive virtual try-on experience for customers and provide a valuable service for fashion e-commerce businesses.