Uncertainty-Aware Calculation of Analytical Gradients of Matrix-Interpolatory Reduced-Order Models for Efficient Structural Optimization

2026-02-26

Computational Engineering, Finance, and Science
AI summary

The authors developed an adaptive sampling method that improves reduced-order models of complex systems by choosing which parameter points to evaluate next, focusing on promising regions rather than trying to be accurate everywhere. They tested this on two mechanical models and used projection-based model order reduction to lower the complexity of the systems. Their approach combines Bayesian statistical learning with gradient-based search to find better designs while keeping computations manageable. The results showed reliable convergence toward the optimum, but with some extra computational effort due to the gradient searches. Overall, their method fits well with existing reduction tools and helps handle uncertainty in model predictions.

adaptive sampling, model order reduction, parametrized dynamical systems, transfer function, iterative rational Krylov algorithm, sparse Bayesian regression, Thompson sampling, adjoint sensitivity analysis, gradient-based optimization
Authors
Marcel Warzecha, Sebastian Resch-Schopper, Gerhard Müller
Abstract
This paper presents an adaptive sampling algorithm tailored to the optimization of parametrized dynamical systems using projection-based model order reduction. Unlike classical sampling strategies, the framework does not aim for a globally small approximation error but focuses on identifying and refining promising regions early on while reducing the number of expensive full-order model evaluations. The algorithm is tested on two models, a Timoshenko beam and a Kelvin cell, which are optimized with respect to the system output in the frequency domain. To this end, different norms of the transfer function serve as the objective function, while up to two geometrical parameters form the vector of design variables. The sampled full-order models are reduced using the iterative rational Krylov algorithm and reprojected onto a global basis. Subsequently, the models are parametrized by performing sparse Bayesian regression at the matrix-entry level of the reduced operators. Thompson sampling is carried out using the posterior distribution of the polynomial coefficients in order to account for uncertainties in the trained regression models. The strategy deployed for sample acquisition incorporates a gradient-based search on the parametrized reduced-order model, which relies on analytical gradients obtained via adjoint sensitivity analysis. The sample set is iteratively refined by adding the optimum found in each search. Results demonstrate robust convergence towards the global optimum but highlight the computational cost introduced by the gradient-based optimization. The probabilistic extensions integrate seamlessly into existing matrix-interpolatory reduction frameworks and enable the analytical calculation of gradients under uncertainty.
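The combination of entry-wise Bayesian regression and Thompson sampling described in the abstract can be illustrated with a minimal sketch. The data, degrees, and precision values below are hypothetical (the paper's actual sparse Bayesian method would also learn per-coefficient prior precisions); here a single reduced-operator entry is fitted with a fixed-precision Gaussian posterior over polynomial coefficients, from which one Thompson sample is drawn.

```python
# Hedged sketch: per-entry Bayesian polynomial regression of a reduced
# operator, followed by Thompson sampling of the coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: parameter samples p_i and the corresponding
# (1,1) entry of a reduced operator, here a quadratic trend plus noise.
p = np.linspace(0.5, 1.5, 8)
y = 2.0 + 0.5 * p - 0.3 * p**2 + 0.01 * rng.standard_normal(p.size)

# Polynomial feature matrix up to degree 2 (columns: 1, p, p^2).
Phi = np.vander(p, 3, increasing=True)

# Conjugate Gaussian posterior w ~ N(mu, Sigma) with fixed prior
# precision alpha and noise precision beta (both assumed values).
alpha, beta = 1e-2, 1e4
Sigma = np.linalg.inv(alpha * np.eye(3) + beta * Phi.T @ Phi)
mu = beta * Sigma @ Phi.T @ y

# Thompson sampling: draw one coefficient vector from the posterior and
# evaluate the interpolated matrix entry at a new parameter value.
w_sample = rng.multivariate_normal(mu, Sigma)
p_new = 1.1
entry = np.polynomial.polynomial.polyval(p_new, w_sample)
print(round(float(entry), 3))
```

Because each matrix entry of the reduced operators gets its own posterior, drawing one coefficient sample per entry yields a full sampled reduced-order model on which the gradient-based search can be run.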
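The adjoint sensitivity analysis mentioned in the abstract can also be sketched in miniature. For a linear system A(theta) x = f with scalar output J = c^T x, one adjoint solve with A^T gives the gradient dJ/dtheta = -lambda^T (dA/dtheta) x. The matrices below are random stand-ins, not the paper's reduced operators; the adjoint gradient is cross-checked against a finite difference.

```python
# Hedged sketch of adjoint sensitivity analysis for a parametrized
# linear system A(theta) x = f with scalar output J = c^T x.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A0 = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned base matrix
A1 = rng.standard_normal((n, n))                  # sensitivity dA/dtheta
f = rng.standard_normal(n)
c = rng.standard_normal(n)

def J(theta):
    """Objective J(theta) = c^T x with A(theta) x = f."""
    return c @ np.linalg.solve(A0 + theta * A1, f)

theta = 0.3
x = np.linalg.solve(A0 + theta * A1, f)         # forward solve
lam = np.linalg.solve((A0 + theta * A1).T, c)   # adjoint solve
grad_adjoint = -lam @ A1 @ x                    # dJ/dtheta = -lam^T (dA/dtheta) x

# Cross-check against a central finite difference.
h = 1e-6
grad_fd = (J(theta + h) - J(theta - h)) / (2 * h)
print(abs(grad_adjoint - grad_fd) < 1e-5)
```

The appeal of the adjoint approach in this setting is that its cost is independent of the number of design variables: one extra linear solve yields the gradient with respect to every parameter.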