Abstract: Learning a physical model from video data that can comprehend physical laws and predict the future trajectories of objects is a formidable challenge in artificial intelligence. Prior approaches either leverage various Partial Differential Equations (PDEs) as soft constraints in the form of PINN losses, or integrate physics simulators into neural networks; however, they often rely on strong priors or high-quality geometry reconstruction. In this paper, we propose CausalGS, a framework that learns the causal dynamics of complex dynamic 3D scenes solely from multi-view videos, without relying on explicit priors. At its core is an inverse physics inference module that decomposes the complex dynamics observed in video into the joint inference of two factors: the initial velocity field, representing the scene's kinematics, and the intrinsic material properties, governing its dynamics. This inferred physical information is then used within a differentiable physics simulator to guide the learning process in a physics-regularized manner. Extensive experiments demonstrate that CausalGS surpasses the state-of-the-art on the highly challenging task of long-term future frame extrapolation, while also achieving strong performance in novel view interpolation. Crucially, our work shows that, without any human annotation, the model is able to learn the complex interactions between multiple physical properties and understand the causal relationships driving the scene's dynamic evolution, solely from visual observations.
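The joint inference described above can be illustrated with a toy sketch: recover an initial velocity and a material parameter by fitting observations through a simulator. This is not the paper's method; the 1D point mass with linear drag, the finite-difference gradients, and all names here are illustrative assumptions standing in for the actual 3D scene, material model, and backpropagation through a differentiable simulator.

```python
import numpy as np

def simulate(v0, drag, n_steps=20, dt=0.1):
    """Toy stand-in for a differentiable physics simulator:
    a 1D point mass with linear drag (both parameters hypothetical)."""
    x, v = 0.0, v0
    xs = []
    for _ in range(n_steps):
        v = v - drag * v * dt   # material property governs dynamics
        x = x + v * dt          # initial velocity sets kinematics
        xs.append(x)
    return np.array(xs)

def loss(params, observed):
    v0, drag = params
    return np.mean((simulate(v0, drag) - observed) ** 2)

def grad_fd(params, observed, eps=1e-5):
    # finite-difference gradient for simplicity; a real system would
    # backpropagate through the simulator instead
    g = np.zeros_like(params)
    for i in range(len(params)):
        p = params.copy(); p[i] += eps
        m = params.copy(); m[i] -= eps
        g[i] = (loss(p, observed) - loss(m, observed)) / (2 * eps)
    return g

observed = simulate(2.0, 0.5)    # synthetic "observations" with known parameters
params = np.array([0.0, 0.0])    # unknown initial velocity and material guess
init_loss = loss(params, observed)
for _ in range(500):
    params -= 0.2 * grad_fd(params, observed)
final_loss = loss(params, observed)
assert final_loss < init_loss    # fitting recovers the physical factors
```

The point of the sketch is the structure of the objective: the data term is purely visual (observed trajectories), yet minimizing it through the simulator forces the two physical factors to take on consistent values.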




Abstract: Continual learning is an active area of machine learning research. To approach strong artificial intelligence that mimics human-level intelligence, AI systems must be able to adapt to ever-changing scenarios and learn new knowledge continuously without forgetting previously acquired knowledge. A phenomenon called catastrophic forgetting emerges when a machine learning model is trained across multiple tasks: the model's performance on previously learned tasks may drop dramatically while it learns a newly seen task. Several continual learning strategies have been proposed to address the catastrophic forgetting problem. Recently, continual learning has also been studied in the context of quantum machine learning. By leveraging the elastic weight consolidation method, a single quantum classifier can perform multiple tasks after being trained consecutively on those tasks. In this work, we incorporate the gradient episodic memory method to train a variational quantum classifier. The gradient of the current task is projected onto the closest gradient that avoids increasing the loss on previous tasks while still permitting it to decrease. We use six quantum state classification tasks to benchmark this method. Numerical simulation results show that it outperforms the elastic weight consolidation method. Furthermore, we observe positive transfer of knowledge to previous tasks: the classifier's performance on previous tasks is enhanced rather than compromised while learning a new task.
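The projection step described above can be sketched as follows. This is a simplified single-memory variant (in the spirit of A-GEM); full gradient episodic memory instead solves a small quadratic program with one inequality constraint per previous task, and all names here are illustrative.

```python
import numpy as np

def project_gradient(g, g_ref):
    """Project the current-task gradient g so it does not conflict with
    g_ref, the gradient of the loss on the episodic memory of a previous
    task. A negative inner product means following g would increase the
    previous-task loss to first order."""
    dot = np.dot(g, g_ref)
    if dot >= 0:
        return g  # no interference: use the gradient unchanged
    # remove the conflicting component along g_ref
    return g - (dot / np.dot(g_ref, g_ref)) * g_ref

# example: a current-task gradient that conflicts with the memory gradient
g = np.array([1.0, -1.0])
g_ref = np.array([0.0, 1.0])
g_proj = project_gradient(g, g_ref)
# after projection, the previous-task loss no longer increases to first order
assert np.dot(g_proj, g_ref) >= 0
```

Because the constraint is an inequality, steps that happen to *decrease* the previous-task loss pass through unmodified, which is what permits the positive backward transfer reported in the abstract.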