Trust Region Constrained Bayesian Optimization with Penalized Constraint Handling
2026-03-25 • Machine Learning
AI summary
The authors developed a new way to solve difficult optimization problems where you have many inputs, limited information, and rules that must be followed. They turned the problem into one without rules by adding penalties when rules are broken, then focused the search on small areas near the best solutions so far. Using this approach helps find good solutions more efficiently and reliably. When tested, their method worked better or as well as others while using fewer tries.
Keywords
Bayesian optimization, constrained optimization, black-box optimization, penalty method, trust region, surrogate model, expected improvement, feasibility region, high-dimensional optimization
Authors
Raju Chowdhury, Tanmay Sen, Prajamitra Bhuyan, Biswabrata Pradhan
Abstract
Constrained optimization in high-dimensional black-box settings is difficult due to expensive evaluations, the lack of gradient information, and complex feasibility regions. In this work, we propose a Bayesian optimization method that combines a penalty formulation, a surrogate model, and a trust region strategy. The constrained problem is converted to an unconstrained form by penalizing constraint violations, which provides a unified modeling framework. A trust region restricts the search to a local region around the current best solution, improving stability and efficiency in high dimensions. Within this region, we use the Expected Improvement acquisition function to select evaluation points by balancing predicted improvement against uncertainty. The proposed trust region method thus integrates penalty-based constraint handling with local surrogate modeling, enabling efficient exploration of feasible regions while maintaining sample efficiency. We compare the proposed method with state-of-the-art methods on synthetic and real-world high-dimensional constrained optimization problems. The results show that our method identifies high-quality feasible solutions with fewer evaluations and maintains stable performance across different settings.
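The three ingredients named in the abstract (penalized objective, local surrogate, Expected Improvement inside a trust region) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy objective `f`, constraint `g`, penalty weight `rho`, the fixed trust-region radius, and the tiny GP with an RBF kernel are all assumptions made for the example; a real trust-region method would also adapt the radius on success and failure.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 2-D test problem: minimize f(x) subject to g(x) <= 0.
def f(x):
    return np.sum((x - 0.25) ** 2)

def g(x):
    return 0.8 - np.sum(x)  # feasible when x1 + x2 >= 0.8

def penalized(x, rho=10.0):
    # Penalty formulation: fold the constraint violation into the objective,
    # turning the constrained problem into an unconstrained one.
    return f(x) + rho * max(0.0, g(x)) ** 2

def gp_posterior(X, y, Xs, ls=0.3, sf=1.0, noise=1e-6):
    # Minimal GP surrogate with an RBF kernel: posterior mean and std at Xs.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf * np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(sf - (v**2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    # EI for minimization: balances predicted improvement and uncertainty.
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
dim, radius = 2, 0.4
X = rng.uniform(0, 1, (8, dim))            # initial space-filling design
y = np.array([penalized(x) for x in X])
for _ in range(30):
    x_best = X[y.argmin()]
    # Trust region: restrict the candidate set to a box around the incumbent.
    lo, hi = np.clip(x_best - radius, 0, 1), np.clip(x_best + radius, 0, 1)
    cand = rng.uniform(lo, hi, (256, dim))
    mu, sd = gp_posterior(X, y, cand)
    x_new = cand[expected_improvement(mu, sd, y.min()).argmax()]
    X = np.vstack([X, x_new])
    y = np.append(y, penalized(x_new))
```

Because the penalty makes every point comparable under a single scalar objective, one surrogate suffices; the trust region keeps that surrogate accurate where it matters, which is what makes the approach viable as the dimension grows.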