Abstract: We introduce Wave-PDE Nets, a neural architecture whose elementary operation is a differentiable simulation of the second-order wave equation. Each layer propagates its hidden state as a continuous field through a medium with trainable spatial velocity c(x) and damping γ(x). A symplectic spectral solver based on FFTs realises this propagation in O(n log n) time. This oscillatory, global mechanism provides a powerful alternative to attention and first-order state-space models. We prove that a single Wave-PDE layer is a universal approximator. On language and vision benchmarks, Wave-PDE Nets match or exceed Transformer performance while demonstrating superior practical efficiency, reducing wall-clock time by up to 30% and peak memory by 25%. Ablation studies confirm the critical role of symplectic integration and a spectral Laplacian for stability and performance. Visualizations of the learned physical parameters reveal that the model learns intuitive strategies for information propagation. These results position Wave-PDE Nets as a computationally efficient and robust architecture with a strong physical inductive bias.
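The abstract does not give solver details, but the core operation it describes can be sketched as follows: a field u(x) and its velocity v(x) are advanced under u_tt = c(x)^2 ∇²u − γ(x) u_t, with the Laplacian evaluated spectrally via FFTs and a kick-drift update. The function names, step size, and step count below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def spectral_laplacian(u):
        # Laplacian of a 1-D field via the FFT: ifft(-(k^2) * fft(u)), O(n log n).
        n = u.shape[-1]
        k = 2.0 * np.pi * np.fft.fftfreq(n)
        return np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

    def wave_pde_layer(u, v, c, gamma, dt=0.1, steps=4):
        # Semi-implicit Euler propagation of the damped wave equation
        #   u_tt = c(x)^2 * Lap(u) - gamma(x) * u_t
        # "kick" the velocity with the acceleration, then "drift" the field.
        for _ in range(steps):
            a = (c ** 2) * spectral_laplacian(u) - gamma * v
            v = v + dt * a
            u = u + dt * v
        return u, v

    # Toy usage: a 256-dimensional hidden state treated as a 1-D field.
    n = 256
    u, v = np.random.randn(n), np.zeros(n)
    c, gamma = np.ones(n), np.full(n, 0.01)   # trainable in the real model
    u_out, v_out = wave_pde_layer(u, v, c, gamma)

In the undamped case this kick-drift update is symplectic; the damping term is simply folded into the kick step here for brevity.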
Abstract: Forecasting chaotic systems is a cornerstone challenge in many scientific fields, complicated by the exponential amplification of even infinitesimal prediction errors. Modern machine learning approaches often falter due to two opposing pitfalls: over-specializing on a single, well-known chaotic system (e.g., Lorenz-63), which limits generalizability, or indiscriminately mixing vast, unrelated time series, which prevents the model from learning the nuances of any specific dynamical regime. We propose Curriculum Chaos Forecasting (CCF), a training paradigm that bridges this gap. CCF organizes training data based on fundamental principles of dynamical systems theory, creating a curriculum that progresses from simple, periodic behaviors to highly complex, chaotic dynamics. We quantify complexity using the largest Lyapunov exponent and attractor dimension, two well-established metrics of chaos. By first training a sequence model on predictable systems and gradually introducing more chaotic trajectories, CCF enables the model to build a robust and generalizable representation of dynamical behaviors. We curate a library of over 50 synthetic ODE/PDE systems to build this curriculum. Our experiments show that pre-training with CCF significantly enhances performance on unseen, real-world benchmarks. On datasets including Sunspot numbers, electricity demand, and human ECG signals, CCF extends the valid prediction horizon by up to 40% compared to random-order training and more than doubles it compared to training on real-world data alone. We demonstrate that this benefit is consistent across various neural architectures (GRU, Transformer) and provide extensive ablations to validate the importance of the curriculum's structure.
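As a rough sketch of the curriculum construction the abstract describes, the snippet below orders a small library of synthetic systems by a complexity score built from the largest Lyapunov exponent and attractor dimension, then splits it into stages that are introduced progressively. The weighting, stage count, and the (approximate, textbook) exponent and dimension values are illustrative assumptions; the paper's 50-system library and exact schedule are not reproduced here.

    def complexity_score(system, w_dim=0.5):
        # Scalar ordering key: largest Lyapunov exponent plus a weighted
        # attractor-dimension term (the weighting is an illustrative choice).
        return system["lyapunov"] + w_dim * system["attractor_dim"]

    def build_curriculum(systems, n_stages=4):
        # Sort from periodic to strongly chaotic, then chunk into stages.
        ranked = sorted(systems, key=complexity_score)
        size = -(-len(ranked) // n_stages)   # ceiling division
        return [ranked[i:i + size] for i in range(0, len(ranked), size)]

    # Hypothetical library entries with approximate textbook values.
    library = [
        {"name": "harmonic_oscillator", "lyapunov": 0.00, "attractor_dim": 1.0},
        {"name": "van_der_pol",         "lyapunov": 0.00, "attractor_dim": 1.0},
        {"name": "roessler",            "lyapunov": 0.07, "attractor_dim": 2.0},
        {"name": "lorenz63",            "lyapunov": 0.91, "attractor_dim": 2.06},
    ]

    for stage, new_systems in enumerate(build_curriculum(library)):
        print(f"stage {stage}: add {[s['name'] for s in new_systems]}")
        # ...train the GRU/Transformer forecaster on all stages unlocked so far...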
Abstract: Mixture-of-Experts (MoE) layers scale transformers by routing tokens to a sparse subset of feed-forward experts. Token-level routing, however, assigns an entire semantic spectrum to each expert, creating capacity bottlenecks, load-balancing pathologies, and limited specialization. We introduce SliceMoE, an architecture that routes contiguous slices of a token's hidden vector. A d-dimensional embedding is partitioned into S slices, and for each slice, a lightweight shared router predicts the top-k experts. Experts operate on their assigned slices independently, and outputs are reassembled, maintaining per-token FLOP efficiency. Because slices from different tokens interleave within an expert, utilization is naturally smoother. We propose a slice-level capacity loss, cross-slice dropout, and efficient fused batched GEMM kernels. Experiments on WikiText-103 language modeling, WMT En-De translation, and three text-classification datasets show SliceMoE attains up to 1.7x faster inference than dense baselines, 12 to 18 percent lower perplexity than parameter-matched token-MoE, and improved expert balance, with interpretable expertise over syntactic versus semantic subspaces.
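The routing scheme can be illustrated with a minimal PyTorch sketch, assuming a standard two-layer MLP expert and a naive per-expert dispatch loop in place of the paper's fused batched GEMM kernels; the slice-level capacity loss and cross-slice dropout are omitted, and all hyperparameters below are placeholders.

    import torch
    import torch.nn as nn

    class SliceMoE(nn.Module):
        # Route contiguous slices of each token's hidden vector to top-k experts
        # through one router shared across slices, then reassemble the token.
        def __init__(self, d_model=512, n_slices=8, n_experts=16, top_k=2):
            super().__init__()
            assert d_model % n_slices == 0
            self.S, self.k = n_slices, top_k
            self.d_slice = d_model // n_slices
            self.router = nn.Linear(self.d_slice, n_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(self.d_slice, 4 * self.d_slice),
                              nn.GELU(),
                              nn.Linear(4 * self.d_slice, self.d_slice))
                for _ in range(n_experts)
            )

        def forward(self, x):                                  # x: (B, T, d_model)
            B, T, D = x.shape
            slices = x.reshape(B * T * self.S, self.d_slice)   # every slice routes independently
            weights, idx = torch.topk(self.router(slices).softmax(-1), self.k, dim=-1)
            out = torch.zeros_like(slices)
            for e, expert in enumerate(self.experts):          # naive dispatch (no fused kernels)
                for j in range(self.k):
                    mask = idx[:, j] == e
                    if mask.any():
                        out[mask] += weights[mask, j, None] * expert(slices[mask])
            return out.reshape(B, T, D)                        # reassemble tokens

    layer = SliceMoE()
    y = layer(torch.randn(2, 16, 512))                         # same shape in and out

Because each token contributes S slices, per-token compute matches a token-level MoE with the same top-k, while the router sees S times more, smaller routing units.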
Abstract: Recent advances in uncertainty estimation for Large Language Models (LLMs) during downstream adaptation have addressed key challenges of reliability and simplicity. However, existing Bayesian methods typically require multiple sampling iterations during inference, creating significant efficiency issues that limit practical deployment. In this paper, we investigate the possibility of eliminating the need for test-time sampling for LLM uncertainty estimation. Specifically, when given an off-the-shelf Bayesian LLM, we distill its aligned confidence into a non-Bayesian student LLM by minimizing the divergence between their predictive distributions. Unlike typical calibration methods, our distillation is carried out solely on the training dataset, without the need for an additional validation dataset. This simple yet effective approach makes uncertainty estimation N times more efficient at test time, where N is the number of samples traditionally required by Bayesian LLMs. Our extensive experiments demonstrate that uncertainty estimation capabilities learned on training data can successfully generalize to unseen test data through our distillation technique, consistently producing results comparable to (or even better than) state-of-the-art Bayesian LLMs.
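A minimal sketch of the distillation objective, under two assumptions not stated in the abstract: the Bayesian teacher's predictive distribution is approximated by averaging N stochastic forward passes, and both models expose HuggingFace-style .logits. Only training batches are used; no validation data is required, and the student needs no test-time sampling.

    import torch
    import torch.nn.functional as F

    def bayesian_predictive(teacher, input_ids, n_samples=8):
        # Average next-token probabilities over n_samples stochastic passes
        # (e.g. MC dropout or sampled posterior adapters -- an assumption here).
        probs = None
        with torch.no_grad():
            for _ in range(n_samples):
                p = teacher(input_ids).logits.softmax(dim=-1)
                probs = p if probs is None else probs + p
        return probs / n_samples

    def distillation_loss(student, teacher, input_ids, n_samples=8):
        # KL(teacher predictive || student), computed on training data only,
        # so the student matches the teacher's calibrated confidence in one pass.
        target = bayesian_predictive(teacher, input_ids, n_samples)
        log_q = F.log_softmax(student(input_ids).logits, dim=-1)
        return F.kl_div(log_q, target, reduction="batchmean")

At test time only the single-pass student is queried, which is where the N-fold efficiency gain over sampling-based Bayesian inference comes from.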