Abstract: How can we build surrogate solvers that train on small domains but scale to larger ones without intrusive access to PDE operators? Inspired by the Data-Driven Finite Element Method (DD-FEM) framework for modular data-driven solvers, we propose the Latent Space Element Method (LSEM), an element-based latent surrogate assembly approach in which a learned subdomain ("element") model can be tiled and coupled to form a larger computational domain. Each element is a LaSDI latent ODE surrogate trained from snapshots on a local patch, and neighboring elements are coupled through learned directional interaction terms in latent space, avoiding Schwarz iterations and interface residual evaluations. A smooth window-based blending reconstructs a global field from overlapping element predictions, yielding a scalable assembled latent dynamical system. Experiments on the 1D Burgers and Korteweg-de Vries equations show that LSEM maintains predictive accuracy while scaling to spatial domains larger than those seen in training. LSEM offers an interpretable and extensible route toward foundation-model surrogate solvers built from reusable local models.
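To make the assembly idea concrete, the following is a minimal, illustrative sketch of an assembled latent dynamical system in that spirit: per-element latent dynamics plus learned directional couplings to neighbors, rolled forward in time and blended into a global field with smooth overlapping windows. All names, shapes, and the stand-in linear operators (A, C_left, C_right, the decoders D) are assumptions for illustration, not the trained LaSDI models or couplings from the work itself.

```python
# Minimal sketch of an assembled latent dynamical system in the spirit of LSEM.
# All shapes and the linear forms below are illustrative assumptions, not the
# paper's actual trained element models or coupling terms.
import numpy as np

rng = np.random.default_rng(0)

n_elem, d_lat, n_loc = 4, 6, 64  # elements, latent dimension, local grid points
A = [rng.normal(scale=0.1, size=(d_lat, d_lat)) for _ in range(n_elem)]  # stand-in local latent dynamics
C_left = rng.normal(scale=0.05, size=(d_lat, d_lat))    # stand-in directional coupling (left neighbor)
C_right = rng.normal(scale=0.05, size=(d_lat, d_lat))   # stand-in directional coupling (right neighbor)
D = [rng.normal(size=(n_loc, d_lat)) for _ in range(n_elem)]  # stand-in linear decoders

def assembled_rhs(Z):
    """Latent dynamics of every element plus learned neighbor couplings."""
    dZ = np.zeros_like(Z)
    for i in range(n_elem):
        dZ[i] = A[i] @ Z[i]
        if i > 0:
            dZ[i] += C_left @ Z[i - 1]
        if i < n_elem - 1:
            dZ[i] += C_right @ Z[i + 1]
    return dZ

def blend(Z, overlap=16):
    """Reconstruct a global field from overlapping element decodes with smooth windows."""
    stride = n_loc - overlap
    n_glob = stride * (n_elem - 1) + n_loc
    w = np.hanning(n_loc) + 1e-6  # smooth window; normalization below makes it a partition of unity
    field, weight = np.zeros(n_glob), np.zeros(n_glob)
    for i in range(n_elem):
        s = i * stride
        field[s:s + n_loc] += w * (D[i] @ Z[i])
        weight[s:s + n_loc] += w
    return field / weight

# Forward-Euler rollout of the assembled latent ODE, then global reconstruction.
Z = rng.normal(size=(n_elem, d_lat))
dt = 1e-2
for _ in range(100):
    Z = Z + dt * assembled_rhs(Z)
u_global = blend(Z)
print(u_global.shape)
```

Because the couplings enter additively in the latent right-hand side, enlarging the domain in this sketch only means appending more elements to the loop; no interface residuals or Schwarz iterations appear.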
Abstract: Large language models (LLMs) generate diverse, situated, persuasive texts from a plurality of potential perspectives, influenced heavily by their prompts and training data. As part of LLM adoption, we seek to characterize - and ideally, manage - the socio-cultural values that they express, for reasons of safety, accuracy, inclusion, and cultural fidelity. We present a validated approach to automatically (1) extracting heterogeneous latent value propositions from texts, (2) assessing resonance and conflict of values with texts, and (3) combining these operations to characterize the pluralistic value alignment of human-sourced and LLM-sourced textual data.
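As a purely illustrative reading of the three operations, the toy sketch below composes (1) extraction, (2) resonance/conflict scoring, and (3) aggregation into a per-value profile. The ValueProposition class, the keyword-based stand-ins, and the [-1, 1] scoring scale are assumptions for illustration only, not the paper's validated LLM-based pipeline.

```python
# Illustrative composition of the three operations named in the abstract.
# The class, function names, and toy keyword scoring are assumptions; the
# actual extraction and assessment in the work are LLM-based and not shown.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass(frozen=True)
class ValueProposition:
    value: str   # e.g. "privacy", "openness"
    stance: str  # e.g. "should be protected"

def extract_values(text: str) -> list[ValueProposition]:
    """(1) Stand-in extractor for latent value propositions."""
    lexicon = {"privacy": "should be protected", "openness": "should be promoted"}
    return [ValueProposition(v, s) for v, s in lexicon.items() if v in text.lower()]

def resonance(vp: ValueProposition, text: str) -> float:
    """(2) Stand-in scorer in [-1, 1]: positive = resonance, negative = conflict."""
    t = text.lower()
    if vp.value in t and "not " + vp.value not in t:
        return 1.0
    return -0.5 if vp.value in t else 0.0

def characterize(corpus: list[str]) -> dict[str, float]:
    """(3) Combine extraction and scoring into a per-value alignment profile."""
    scores = defaultdict(list)
    for text in corpus:
        for vp in extract_values(text):
            scores[vp.value].append(resonance(vp, text))
    return {value: mean(vals) for value, vals in scores.items()}

corpus = ["Privacy matters in model deployment.", "Openness drives adoption, not privacy."]
print(characterize(corpus))  # e.g. {'privacy': 0.25, 'openness': 1.0}
```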
Abstract: The reconstruction of current distributions from samples of their induced magnetic field is a challenging problem for multiple reasons. First, the problem of reconstructing general three-dimensional current distributions is ill-posed. Second, the current-to-field operator acts as a low-pass filter that attenuates high-spatial-frequency information, so that even where the inversion is formally possible, applying the formal inverse yields solutions with unacceptable noise. Most contemporary methods for reconstructing current distributions in two dimensions are based on Fourier techniques and apply a low-pass filter to the $B$-field data, which prevents excessive noise amplification during reconstruction at the cost of blurring the reconstructed solution. In this report, we present a method of current recovery based on penalizing the $L^1$ norm of the curl of the current distribution. The utility of this method rests on the observation that in microelectronics settings the conductivity is piecewise constant. We also reconstruct the current fields using a divergence-free wavelet basis. This has the advantage of automatically enforcing current continuity and halving the number of unknowns that must be solved for. Additionally, the curl operator can be computed exactly and analytically in this wavelet expansion, which simplifies the application of the $L^1$-curl regularizer. We demonstrate improved reconstruction quality relative to Fourier-based techniques on both simulated and laboratory-acquired magnetic field data.
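In schematic form, with notation chosen here for illustration ($\mathcal{A}$ for the current-to-field map, $W$ for divergence-free wavelet synthesis, $c$ for its coefficients, $\lambda$ for the regularization weight, $B_{\mathrm{meas}}$ for the sampled field), the reconstruction described above can be read as
$$ \hat{c} \;=\; \arg\min_{c}\; \tfrac{1}{2}\,\big\lVert B_{\mathrm{meas}} - \mathcal{A} W c \big\rVert_{2}^{2} \;+\; \lambda\,\big\lVert \nabla \times (W c) \big\rVert_{1}, \qquad J = W\hat{c}, $$
with $\nabla \cdot J = 0$ holding automatically because the wavelet basis is divergence-free, and with the curl term evaluated analytically in the wavelet expansion.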