Abstract: Long-horizon autoregressive forecasting of chaotic dynamical systems remains challenging due to rapid error amplification and distribution shift: small one-step inaccuracies compound into physically inconsistent rollouts and a collapse of large-scale statistics. We introduce MSR-HINE, a hierarchical implicit forecaster that augments multiscale latent priors with multi-rate recurrent modules operating at distinct temporal scales. At each step, coarse-to-fine recurrent states generate latent priors, an implicit one-step predictor refines the state with multiscale latent injections, and a gated fusion with posterior latents enforces scale-consistent updates; a lightweight hidden-state correction further aligns the recurrent memories with the fused latents. The resulting architecture maintains long-term context on slow manifolds while preserving fast-scale variability, mitigating error accumulation in chaotic rollouts. Across two canonical benchmarks, MSR-HINE yields substantial gains over a U-Net autoregressive baseline: on Kuramoto-Sivashinsky it reduces end-horizon RMSE by 62.8% at H=400 and improves end-horizon ACC by +0.983 (from -0.155 to 0.828), extending the ACC >= 0.5 predictability horizon from 241 to 400 steps; on Lorenz-96 it reduces RMSE by 27.0% at H=100 and improves end-horizon ACC by +0.402 (from 0.144 to 0.545), extending the ACC >= 0.5 horizon from 58 to 100 steps.
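To make the stepwise update concrete, below is a minimal sketch of one multi-rate autoregressive step in the spirit of this abstract. All module names, layer widths, the slow-update period, and the 0.1 correction coefficient are illustrative assumptions, not the authors' implementation; the sketch shows only the pattern of fast/slow recurrent memories, latent-prior injection, gated fusion, and hidden-state correction.

```python
# Hypothetical sketch of an MSR-HINE-style step (not the paper's code).
# A slow GRU updated every `slow_period` steps and a fast GRU updated
# every step form coarse-to-fine memories; their states are injected
# into a one-step predictor, then fused with a posterior latent via a
# learned gate, and the fast memory is nudged toward the fused latent.
import torch
import torch.nn as nn

class MultiRateStep(nn.Module):
    def __init__(self, dim=64, slow_period=8):
        super().__init__()
        self.slow_period = slow_period
        self.fast_cell = nn.GRUCell(dim, dim)   # fast-scale memory, updated every step
        self.slow_cell = nn.GRUCell(dim, dim)   # slow-scale memory, updated every slow_period steps
        self.predictor = nn.Sequential(         # one-step predictor with multiscale latent injection
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.gate = nn.Linear(2 * dim, dim)     # gated fusion of prior and posterior latents

    def forward(self, x, h_fast, h_slow, t):
        h_fast = self.fast_cell(x, h_fast)
        if t % self.slow_period == 0:           # coarse memory sees a subsampled sequence
            h_slow = self.slow_cell(x, h_slow)
        prior = self.predictor(torch.cat([x, h_fast, h_slow], dim=-1))
        posterior = x + prior                   # stand-in for the refined posterior latent
        g = torch.sigmoid(self.gate(torch.cat([prior, posterior], dim=-1)))
        fused = g * prior + (1 - g) * posterior           # scale-consistent update
        h_fast = h_fast + 0.1 * (fused - h_fast)          # lightweight hidden-state correction
        return fused, h_fast, h_slow

# Usage: step = MultiRateStep(); x = torch.randn(4, 64)
# h_f = torch.zeros(4, 64); h_s = torch.zeros(4, 64)
# x, h_f, h_s = step(x, h_f, h_s, t=0)
```

Updating the slow cell only on a subsampled sequence is one simple way to retain long-term context on slow manifolds while the fast cell tracks step-to-step variability, consistent with the abstract's stated design goal.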
Abstract: Deep Operator Networks (DeepONets) have recently emerged as powerful data-driven frameworks for learning nonlinear operators, and are particularly suited to approximating solutions of partial differential equations (PDEs). Despite their promising capabilities, the standard DeepONet implementation, which typically employs fully connected linear layers in the trunk network, can struggle to capture the complex spatial structures inherent to many PDEs. To address this, we introduce Fourier-embedded trunk networks within the DeepONet architecture, leveraging random Fourier feature mappings to enrich spatial representation capabilities. Our proposed Fourier-embedded DeepONet, FEDONet, outperforms the traditional DeepONet across a comprehensive suite of PDE-driven datasets, including the two-dimensional Poisson equation, Burgers' equation, the Lorenz-63 chaotic system, the Eikonal equation, the Allen-Cahn equation, the Kuramoto-Sivashinsky equation, and the Lorenz-96 system. Empirical evaluations of FEDONet consistently show significant improvements in solution reconstruction accuracy, with average relative L2 performance gains of 2-3x over the DeepONet baseline. This study highlights the effectiveness of Fourier embeddings in enhancing neural operator learning, offering a robust and broadly applicable methodology for PDE surrogate modeling.
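As a concrete illustration of the trunk modification, below is a minimal sketch of a DeepONet whose query coordinates are lifted by a fixed random Fourier feature map gamma(y) = [cos(2*pi*B*y), sin(2*pi*B*y)] before the trunk MLP. The layer widths, frequency scale sigma, and latent dimension p are hypothetical assumptions; the paper's exact configuration is not given in the abstract.

```python
# Hypothetical sketch of a Fourier-embedded DeepONet trunk (FEDONet-style).
# The branch encodes the input function sampled at sensor locations; the
# trunk receives random-Fourier-feature-embedded query coordinates; the
# output is the standard DeepONet inner product of branch and trunk features.
import torch
import torch.nn as nn

class FEDONetSketch(nn.Module):
    def __init__(self, n_sensors=100, coord_dim=2, n_features=128, p=64, sigma=1.0):
        super().__init__()
        # Fixed (non-trainable) Gaussian frequency matrix for the embedding.
        self.register_buffer("B", torch.randn(n_features, coord_dim) * sigma)
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, 128), nn.Tanh(), nn.Linear(128, p)
        )
        self.trunk = nn.Sequential(
            nn.Linear(2 * n_features, 128), nn.Tanh(), nn.Linear(128, p)
        )

    def fourier_embed(self, y):
        proj = 2 * torch.pi * y @ self.B.T                    # (n_points, n_features)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)                            # (batch, p)
        t = self.trunk(self.fourier_embed(y))                 # (n_points, p)
        return b @ t.T                                        # (batch, n_points) predicted solution

# Usage: u = torch.randn(8, 100); y = torch.rand(500, 2)
# out = FEDONetSketch()(u, y)   # solution values at 500 query points
```

Keeping the frequency matrix B fixed (rather than learned) follows the standard random Fourier feature recipe, which is one common way such embeddings help MLPs represent higher-frequency spatial structure.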