Abstract: A chaotic system is a highly volatile system characterized by its sensitive dependence on initial conditions and external factors. Chaotic systems are prevalent throughout the world today: in weather patterns, disease outbreaks, and even financial markets. They appear across the sciences and humanities, so being able to predict these systems is greatly beneficial to society. In this study, we evaluate 10 different machine learning models and neural networks [1], using Root Mean Squared Error (RMSE) and R^2 values, for their ability to predict one of these systems, the multi-pendulum. We begin by generating synthetic data representing the angles of the pendulum over time using the fourth-order Runge-Kutta method for solving ordinary differential equations (ODE-RK4) [2]. Initially, we used a single-step sliding-window approach, predicting the 50th step after training on steps 0-49, and so forth. However, to more accurately cover chaotic motion and behavior in these systems, we transitioned to a time-step-based approach. Here, we trained each model or network on trajectories from many initial angles and tested it on a completely new set of initial angles, including angles 'in between' the training values, to capture chaotic motion to its fullest extent. We also evaluated the stability of the system using Lyapunov exponents. We concluded that for the double pendulum, the best model was the Long Short-Term Memory network (LSTM) [3] for both the sliding-window and time-step approaches, in both the friction and frictionless scenarios. For the triple pendulum, the Vanilla Recurrent Neural Network (VRNN) [4] was the best for the sliding-window approach and the Gated Recurrent Unit (GRU) [5] was the best for the time-step approach, while with friction, the LSTM was the best.
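The following is a minimal sketch, not the authors' code, of the data-generation and windowing steps the abstract describes: a classical RK4 integrator applied to the standard frictionless double-pendulum equations of motion, followed by slicing the angle series into 50-step sliding windows for single-step-ahead prediction. All parameter values (masses, lengths, time step, initial angles) are illustrative assumptions.

```python
import numpy as np

G = 9.81            # gravitational acceleration (m/s^2)
M1, M2 = 1.0, 1.0   # pendulum masses (assumed values)
L1, L2 = 1.0, 1.0   # pendulum rod lengths (assumed values)

def derivs(state):
    """Time derivatives of [theta1, omega1, theta2, omega2] for the frictionless double pendulum."""
    th1, w1, th2, w2 = state
    delta = th2 - th1
    den1 = (M1 + M2) * L1 - M2 * L1 * np.cos(delta) ** 2
    dw1 = (M2 * L1 * w1 ** 2 * np.sin(delta) * np.cos(delta)
           + M2 * G * np.sin(th2) * np.cos(delta)
           + M2 * L2 * w2 ** 2 * np.sin(delta)
           - (M1 + M2) * G * np.sin(th1)) / den1
    den2 = (L2 / L1) * den1
    dw2 = (-M2 * L2 * w2 ** 2 * np.sin(delta) * np.cos(delta)
           + (M1 + M2) * G * np.sin(th1) * np.cos(delta)
           - (M1 + M2) * L1 * w1 ** 2 * np.sin(delta)
           - (M1 + M2) * G * np.sin(th2)) / den2
    return np.array([w1, dw1, w2, dw2])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(state)
    k2 = derivs(state + 0.5 * dt * k1)
    k3 = derivs(state + 0.5 * dt * k2)
    k4 = derivs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(theta1_0, theta2_0, steps=2000, dt=0.01):
    """Integrate from rest at the given initial angles; return angles with shape (steps, 2)."""
    state = np.array([theta1_0, 0.0, theta2_0, 0.0])
    angles = np.empty((steps, 2))
    for i in range(steps):
        angles[i] = state[[0, 2]]
        state = rk4_step(state, dt)
    return angles

def sliding_windows(series, window=50):
    """Build (steps t..t+49, step t+50) pairs for single-step-ahead prediction."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

angles = simulate(np.radians(120.0), np.radians(-10.0))  # illustrative initial angles
X, y = sliding_windows(angles)                            # X: (N, 50, 2), y: (N, 2)
```

Windows of this shape can then be fed to any of the sequence models under comparison (e.g., an LSTM or GRU); the time-step approach instead trains on full trajectories from many initial angles and tests on unseen ones.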
Abstract: Deep reinforcement learning (DRL) is playing an increasingly important role in real-world applications. However, obtaining an optimally performing DRL agent for complex tasks, especially those with sparse rewards, remains a significant challenge. The training of a DRL agent can often become trapped in a bottleneck without further progress. In this paper, we propose RICE, an innovative refining scheme for reinforcement learning that incorporates explanation methods to break through training bottlenecks. The high-level idea of RICE is to construct a new initial state distribution that combines both the default initial states and critical states identified through explanation methods, thereby encouraging the agent to explore from the mixed initial states. Through careful design, we can theoretically guarantee that our refining scheme has a tighter sub-optimality bound. We evaluate RICE in various popular RL environments and real-world applications. The results demonstrate that RICE significantly outperforms existing refining schemes in enhancing agent performance.
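A minimal sketch of the mixed initial-state idea as I read it from the abstract (not the RICE implementation): with some probability an episode restarts from a critical state flagged by an explanation method, otherwise from the environment's default initial-state distribution. `critical_states`, `set_state`, and `mix_prob` are hypothetical placeholders, not names from the paper.

```python
import random

def reset_mixed(env, critical_states, mix_prob=0.5):
    """Reset `env`, sometimes warm-starting from a previously identified critical state."""
    obs = env.reset()                        # sample from the default initial-state distribution
    if critical_states and random.random() < mix_prob:
        state = random.choice(critical_states)
        obs = env.set_state(state)           # hypothetical hook: restore a saved simulator state
    return obs
```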