"Nested Operator Inference for Multifidelity Uncertainty Quantification in Ice Sheet Simulations"
Nicole Aretz
University of Texas at Austin
Friday, Jan 16, 2026
- Colloquium - 499 DSL Seminar Room
- 03:30 to 04:30 PM Eastern Time (US and Canada)
Join via Zoom: Meeting # 942 7359 5552
Abstract:
We present a nested Operator Inference method for data-driven learning of physics-based reduced-order models. The goal of the approach is to approximate the solution of highly accurate but computationally expensive “full-order” models, for example to enable multifidelity uncertainty quantification.
Projection-based model order reduction methods exploit the intrinsic low-dimensionality of the full-order solution manifold. The resulting reduced-order models (ROMs) typically achieve significant computational savings while remaining physically interpretable through the governing equations. Operator Inference (OpInf) is a data-driven learning technique that constructs projection-based ROMs without access to the full-order operators. Because the number of degrees of freedom in the classic OpInf learning problem scales polynomially with the dimension of the reduced space, classic OpInf requires carefully tuned regularization to balance the numerical stability of the learning problem, the structural stability of the learned ROM, and the achieved reconstruction accuracy.

Nested OpInf exploits the inherent hierarchy within the reduced space to iteratively construct initial guesses for the OpInf learning problem that prioritize the interactions of the dominant modes. For any target reduced dimension, the computed initial guess corresponds to a ROM whose snapshot reconstruction error is provably no larger than that of standard OpInf. Motivated by the need for different types of surrogate models in multifidelity uncertainty quantification, we show how nested OpInf can be used to build a ROM of the Greenland ice sheet. Under moderate climate forcing, the learned ROM achieves a computational speed-up of four orders of magnitude while keeping the generalization error for unseen parameters below 5% over a 30-year simulation horizon.
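To make the structure of the learning problem concrete, below is a minimal NumPy sketch of quadratic Operator Inference with a nested warm start: operators learned for the dominant modes are zero-padded and reused as the centering point of the Tikhonov penalty when fitting a larger reduced dimension. The function and variable names (opinf_fit, embed, A_hat, H_hat, lam) and the Tikhonov-centering variant are illustrative assumptions, not the speaker's formulation, and the random snapshot data stands in for actual reduced ice sheet trajectories.

```python
# Minimal sketch of (nested) Operator Inference for a quadratic ROM
#     d/dt q_hat = A_hat q_hat + H_hat (q_hat kron q_hat).
# Illustrative only; not the speaker's implementation.
import numpy as np

def opinf_fit(Qhat, dQhat, lam=1e-3, center=None):
    """Learn [A_hat, H_hat] from reduced snapshots Qhat (r x k) and their time
    derivatives dQhat (r x k) via Tikhonov-regularized least squares. If
    `center` is given, the penalty is lam * ||O - center||_F^2, so a nested
    initial guess can bias the solution toward the dominant-mode interactions."""
    r, k = Qhat.shape
    # Data matrix: linear terms plus (redundant) quadratic Kronecker terms.
    quad = np.einsum("ik,jk->ijk", Qhat, Qhat).reshape(r * r, k)
    D = np.vstack([Qhat, quad]).T                      # k x (r + r^2)
    O0 = np.zeros((r, r + r * r)) if center is None else center
    # Normal equations of the regularized least-squares problem.
    lhs = D.T @ D + lam * np.eye(r + r * r)
    rhs = D.T @ dQhat.T + lam * O0.T
    O = np.linalg.solve(lhs, rhs).T                    # r x (r + r^2)
    return O[:, :r], O[:, r:]                          # A_hat, H_hat

def embed(A_small, H_small, r):
    """Zero-pad operators learned for the leading modes into the operator
    block for dimension r, to serve as a nested initial guess."""
    rs = A_small.shape[0]
    A0 = np.zeros((r, r)); A0[:rs, :rs] = A_small
    H0 = np.zeros((r, r, r))
    H0[:rs, :rs, :rs] = H_small.reshape(rs, rs, rs)
    return np.hstack([A0, H0.reshape(r, r * r)])

# Usage: learn a 5-mode ROM, embed it, then fit a 10-mode ROM warm-started
# from the dominant-mode interactions (placeholder random data).
rng = np.random.default_rng(0)
Qhat, dQhat = rng.standard_normal((10, 200)), rng.standard_normal((10, 200))
A5, H5 = opinf_fit(Qhat[:5], dQhat[:5])
A10, H10 = opinf_fit(Qhat, dQhat, center=embed(A5, H5, 10))
```

In this sketch the nested guess only shifts the regularization center; the abstract's provable error guarantee concerns the speaker's actual construction of the initial guess, which this toy example does not reproduce.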
