Over the last decade, parametric uncertainty quantification (UQ) methods have reached a considerable level of maturity; the same cannot be said about the representation and quantification of structural, or model, errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations, or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without a commensurate improvement in predictive skill. Neglecting model errors during calibration or tuning leads to overconfident and biased parameter estimates. At the same time, the most advanced methods that do account for model error merely correct output biases, augmenting model outputs with statistical error terms that can violate physical laws or render the calibrated model ineffective in extrapolative scenarios.
This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties, and surrogate-related errors. Specifically, the model error terms are embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions and renders calibrated predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively disambiguate model structural deficiencies from errors of data acquisition.
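As a minimal illustration of the embedding idea (a toy sketch, not the UQ Toolkit implementation), the following places a stochastic model-error term inside an uncertain model component, here a hypothetical decay-rate constant, and propagates it by Monte Carlo sampling. Because the error lives inside the model, every realization still obeys the model's built-in physical constraints, unlike an additive output correction, which could produce unphysical predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical physical model: exponential decay, positive by construction.
def model(t, rate):
    return np.exp(-rate * t)

t = np.linspace(0.0, 5.0, 50)

# Embedded model error: perturb the uncertain component (the rate constant)
# with a stochastic term delta ~ N(0, sigma^2), rather than appending a
# statistical correction to the model output itself.
rate_nominal, sigma = 0.8, 0.1
samples = np.array([model(t, rate_nominal + sigma * rng.standard_normal())
                    for _ in range(2000)])

# Predictive mean and standard deviation induced by the embedded error term.
mean, std = samples.mean(axis=0), samples.std(axis=0)

# Every sample respects positivity; an external correction y + eps would not
# guarantee this.
assert (samples > 0).all()
```

In a full calibration setting, the distribution of the embedded term (here a fixed Gaussian) would itself be inferred from observational data alongside the physical parameters.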
The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the technique on a few applications of interest, including calibration of the ACME Land Model against a wide range of measurements obtained at select sites.