Deep reinforcement learning (DRL) for the fluidic pinball, three individually rotating cylinders arranged in an equilateral triangular configuration in a uniform flow, can learn efficient flow control strategies because its self-learning and data-driven state estimation are well suited to complex fluid dynamics problems. In this work, we present a DRL-based real-time feedback strategy that controls the hydrodynamic force on the fluidic pinball, i.e., force extremum seeking and force tracking, through the cylinders' rotation. By adequately designing reward functions and encoding historical observations, and after thousands of automatic learning iterations, the DRL-based control was shown to make reasonable and valid decisions in a nonparametric control parameter space, comparable to, and in some cases better than, the optimal policy found through lengthy brute-force searching. One of these results was then analyzed with a machine learning model, which shed light on the basis of the decision-making and the physical mechanisms of the force tracking process. The findings of this work enable hydrodynamic force control in the operation of the fluidic pinball system and potentially pave the way for efficient active flow control strategies in other complex fluid dynamics problems.
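The two ingredients this abstract highlights, encoding historical observations into the agent's state and designing a reward for force tracking, can be illustrated with a minimal sketch. The class and function names below are hypothetical and stand in for whatever representation the authors actually used; only the general idea (observation stacking plus a tracking-error penalty) comes from the abstract.

```python
import numpy as np
from collections import deque

class HistoryEncoder:
    """Stack the last k observation vectors into a single agent state
    (an illustrative stand-in for the paper's historical-observation encoding)."""
    def __init__(self, k, obs_dim):
        self.buf = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def update(self, obs):
        # Append the newest observation and return the flattened history.
        self.buf.append(np.asarray(obs, dtype=float))
        return np.concatenate(self.buf)

def tracking_reward(force, target, penalty=1.0):
    """Reward is highest (zero) when the measured force matches the target."""
    return -penalty * abs(force - target)

enc = HistoryEncoder(k=4, obs_dim=3)
state = enc.update([0.1, 0.2, 0.3])
print(state.shape)                 # (12,)
print(tracking_reward(1.2, 1.0))   # approx -0.2
```

A force-extremum objective would simply replace the tracking penalty with the (signed) measured force itself, so the agent maximizes or minimizes it directly.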
Calibration of highly dynamic multi-physics manufacturing processes, such as electrohydrodynamics-based additive manufacturing (AM) technologies (E-jet printing), is still performed through labor-intensive trial-and-error practices. These practices have hindered the broad adoption of such technologies and demand a new paradigm of self-calibrating E-jet printing machines. To address this need, we developed GPJet, an end-to-end physics-informed Bayesian learning framework, and tested it on a virtual E-jet printing machine with in-process jet monitoring capabilities. GPJet consists of three modules: a) the Machine Vision module, b) the Physics-Based Modeling module, and c) the Machine Learning (ML) module. We demonstrate that the Machine Vision module can extract high-fidelity jet features in real time from video data using an automated, parallelized computer vision workflow. In addition, we show that the Machine Vision module, combined with the Physics-Based Modeling module, can provide closed-loop sensory feedback of high- and low-fidelity data to the Machine Learning module. Powered by our data-centric approach, we demonstrate that the online ML planner can actively learn the jet process dynamics from video and physics at minimal experimental cost. GPJet brings us one step closer to realizing the vision of intelligent AM machines that can efficiently search complex process-structure-property landscapes and create optimized material solutions for a wide range of applications at a fraction of the current cost and time.
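The parallelized frame-to-feature step of a machine vision module like the one described here can be sketched as follows. The feature definitions (per-frame jet length and mean width from a thresholded image) and all names are illustrative assumptions, not GPJet's actual pipeline.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def extract_jet_features(frame, threshold=0.5):
    """Reduce one grayscale video frame to simple jet features
    (hypothetical stand-ins for the high-fidelity features in the paper)."""
    mask = frame > threshold
    widths = mask.sum(axis=1)            # bright pixels per image row
    length = int((widths > 0).sum())     # number of rows containing jet
    width = float(widths[widths > 0].mean()) if length else 0.0
    return {"length_px": length, "mean_width_px": width}

# Synthetic frames containing a 4-row, 2-pixel-wide "jet".
frames = [np.zeros((8, 8)) for _ in range(4)]
for f in frames:
    f[2:6, 3:5] = 1.0

# Process frames in parallel, mirroring the parallelized workflow.
with ThreadPoolExecutor() as pool:
    feats = list(pool.map(extract_jet_features, frames))
print(feats[0])  # {'length_px': 4, 'mean_width_px': 2.0}
```

In a real pipeline each worker would consume decoded frames from a video stream, and the resulting feature time series would feed the physics-based model and the ML planner.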
We demonstrate experimentally the feasibility of applying reinforcement learning (RL) to flow control problems by automatically discovering active control strategies without any prior knowledge of the flow physics. We consider the turbulent flow past a circular cylinder, with the aim of reducing the cylinder drag force or maximizing the power-gain efficiency by properly selecting the rotational speeds of two small-diameter cylinders located downstream of and parallel to the larger cylinder. Given properly designed rewards and noise-reduction techniques, after tens of towing experiments the RL agent discovered the optimal control strategy, comparable to the optimal static control. While RL has been found effective in recent computational flow simulation studies, this is the first time its effectiveness has been demonstrated experimentally, paving the way for exploring new optimal active flow control strategies in complex fluid mechanics applications.
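The experimental loop described above, an agent picking rotation speeds for the two control cylinders and receiving a noisy drag measurement, can be caricatured in a few lines. The quadratic "drag model," the epsilon-greedy bandit, and the noise-reduction-by-averaging are illustrative stand-ins, not the authors' actual plant or algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
speeds = np.linspace(-2.0, 2.0, 9)               # candidate rotation rates
actions = [(a, b) for a in speeds for b in speeds]

def noisy_drag(a, b):
    """Synthetic drag landscape, minimized at (1.0, -1.0), plus sensor noise."""
    return (a - 1.0) ** 2 + (b + 1.0) ** 2 + rng.normal(0, 0.05)

q = np.zeros(len(actions))                       # running reward estimates
n = np.zeros(len(actions))
for t in range(3000):
    if rng.random() < 0.1:                       # explore a random action
        i = int(rng.integers(len(actions)))
    else:                                        # exploit the best estimate
        i = int(np.argmax(q))
    r = -noisy_drag(*actions[i])                 # reward = negative drag
    n[i] += 1
    q[i] += (r - q[i]) / n[i]                    # incremental mean averages out noise

best = actions[int(np.argmax(q))]
print(best)  # expected near (1.0, -1.0)
```

The incremental-mean update plays the role of the noise-reduction techniques mentioned in the abstract: repeated measurements of the same action average away the stochastic component of the drag signal.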