BarrierSteer: LLM Safety via Learning Barrier Steering
2026-02-23 • Machine Learning • Artificial Intelligence
AI summary
The authors present BarrierSteer, a new method to make large language models safer by stopping them from creating harmful or unsafe responses during use. Their technique uses math-based rules called Control Barrier Functions applied inside the model’s internal workings to detect and block unsafe outputs without changing the model itself. They show that this approach works well, cutting down risky responses and beating other safety methods. The method is both practical and backed up by solid theory.
large language models • adversarial attacks • unsafe content • Control Barrier Functions • latent space • response safety • constraint merging • model steering • inference • safety mechanisms
Authors
Thanh Q. Tran, Arun Verma, Kiwan Wong, Bryan Kian Hsiang Low, Daniela Rus, Wei Xiao
Abstract
Despite the state-of-the-art performance of large language models (LLMs) across diverse tasks, their susceptibility to adversarial attacks and unsafe content generation remains a major obstacle to deployment, particularly in high-stakes settings. Addressing this challenge requires safety mechanisms that are both practically effective and supported by rigorous theory. We introduce BarrierSteer, a novel framework that formalizes response safety by embedding learned non-linear safety constraints directly into the model's latent representation space. BarrierSteer employs a steering mechanism based on Control Barrier Functions (CBFs) to detect and prevent unsafe response trajectories with high precision during inference. By enforcing multiple safety constraints through efficient constraint merging, without modifying the underlying LLM parameters, BarrierSteer preserves the model's original capabilities and performance. We provide theoretical results establishing that applying CBFs in latent space offers a principled and computationally efficient approach to enforcing safety. Our experiments across multiple models and datasets show that BarrierSteer substantially reduces adversarial success rates, decreases unsafe generations, and outperforms existing methods.
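To make the CBF-based steering idea concrete, the sketch below shows a simplified version of the mechanism the abstract describes: a discrete-time CBF condition enforced on a latent vector, with a minimal-norm correction applied whenever a proposed latent step would violate it. This is an illustrative assumption, not BarrierSteer itself — the paper uses learned non-linear barriers and merges multiple constraints, whereas this sketch uses a single hypothetical linear barrier `h(x) = w·x + b` (e.g., a learned safety probe), for which the correction has a closed form.

```python
import numpy as np

def cbf_steer(x_prev, x_next, w, b, alpha=0.5):
    """Minimal-norm latent correction enforcing a discrete-time CBF condition.

    h(x) = w @ x + b is a hypothetical learned linear barrier: h(x) >= 0
    means the latent x lies in the safe set. The discrete CBF condition
        h(x_next) >= (1 - alpha) * h(x_prev),   0 < alpha <= 1,
    bounds how quickly h may decay, keeping trajectories away from the
    unsafe region rather than reacting only after a violation.
    """
    h_prev = w @ x_prev + b
    h_next = w @ x_next + b
    target = (1.0 - alpha) * h_prev
    if h_next >= target:
        return x_next  # condition already satisfied: no steering needed
    # For a linear barrier, the QP
    #   min ||u||^2  s.t.  w @ (x_next + u) + b >= target
    # has the closed-form solution u = lam * w with
    #   lam = (target - h_next) / ||w||^2.
    u = (target - h_next) / (w @ w) * w
    return x_next + u
```

With `w = [1, 0]`, `b = 0`, `alpha = 0.5`, a step from `x_prev = [1, 0]` to `x_next = [-1, 0]` would cross into the unsafe half-space, so the correction pulls it back to `[0.5, 0]`, exactly meeting the decay bound; a step that already satisfies the condition is returned unchanged, which is why the method leaves safe generations untouched.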