Boundary Point Jailbreaking of Black-Box LLMs
2026-02-16 • Machine Learning
AI summary
The authors created a new way to trick large language models (LLMs) that have strong protections against harmful content. Their method, called Boundary Point Jailbreaking (BPJ), only needs to know whether the model's safety system flags a response or not, without any inside information. BPJ works by breaking the harmful target down into a sequence of intermediate steps and carefully testing small changes to find the best way to bypass defenses. The technique succeeded where others failed, even against advanced systems such as GPT-5's safety classifiers. The authors also found that defending against BPJ requires more than checking one interaction at a time: defenders need to look for patterns across many attempts.
Large Language Models · Jailbreaking · Adversarial Attacks · Black-box Attack · Classifier · Constitutional Classifiers · Red Teaming · GPT-5 · Safety Systems · Batch-level Monitoring
Authors
Xander Davies, Giorgi Giglemiani, Edmund Lau, Eric Winsor, Geoffrey Irving, Yarin Gal
Abstract
Frontier LLMs are safeguarded against attempts to extract harmful information via adversarial prompts known as "jailbreaks". Recently, defenders have developed classifier-based systems that have survived thousands of hours of human red teaming. We introduce Boundary Point Jailbreaking (BPJ), a new class of automated jailbreak attacks that evade the strongest industry-deployed safeguards. Unlike previous attacks that rely on white/grey-box assumptions (such as classifier scores or gradients) or libraries of existing jailbreaks, BPJ is fully black-box and uses only a single bit of information per query: whether or not the classifier flags the interaction. To achieve this, BPJ addresses the core difficulty in optimising attacks against robust real-world defences: evaluating whether a proposed modification to an attack is an improvement. Instead of directly trying to learn an attack for a target harmful string, BPJ converts the string into a curriculum of intermediate attack targets and then actively selects evaluation points that best detect small changes in attack strength ("boundary points"). We believe BPJ is the first fully automated attack algorithm that succeeds in developing universal jailbreaks against Constitutional Classifiers, as well as the first automated attack algorithm that succeeds against GPT-5's input classifier without relying on human attack seeds. BPJ is difficult to defend against in individual interactions but incurs many flags during optimisation, suggesting that effective defence requires supplementing single-interaction methods with batch-level monitoring.
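The optimisation loop described in the abstract can be made concrete with a rough sketch. The Python below is not the authors' implementation; it only illustrates the single-bit accept/reject structure and the boundary-point idea. The oracle `classifier_flags`, the `mutate` function, and the `curriculum` list (assumed to be ordered from easier intermediate targets to the full harmful target) are all hypothetical stand-ins.

```python
from typing import Callable

def classifier_flags(attack: str, target: str) -> bool:
    """Hypothetical black-box oracle: True if the deployed safety classifier
    flags the interaction for this attack/target pair. In practice this would
    be a single API query returning one bit; here it is an unimplemented stub."""
    raise NotImplementedError("wire this to the actual black-box classifier")

def select_boundary_points(attack: str, curriculum: list[str], k: int = 5) -> list[int]:
    """Pick a window of k curriculum targets around the easiest target the
    current attack still fails on. These are the evaluation points whose
    flag/no-flag outcome is most likely to flip under a small edit, so they
    carry the most information per one-bit query."""
    flagged = [i for i, t in enumerate(curriculum) if classifier_flags(attack, t)]
    first_fail = flagged[0] if flagged else len(curriculum)
    lo = max(0, first_fail - k // 2)
    return list(range(lo, min(len(curriculum), lo + k)))

def attack_strength(attack: str, curriculum: list[str], boundary: list[int]) -> int:
    """Score an attack by how many boundary-point targets it passes unflagged."""
    return sum(not classifier_flags(attack, curriculum[i]) for i in boundary)

def bpj_sketch(seed_attack: str,
               curriculum: list[str],
               mutate: Callable[[str], str],
               n_steps: int = 200) -> str:
    """Hill-climb a single attack prompt using only one bit of feedback per query."""
    attack = seed_attack
    for _ in range(n_steps):
        boundary = select_boundary_points(attack, curriculum)
        candidate = mutate(attack)
        # Accept the edit only if it unflags at least as many boundary targets
        # as the current attack does.
        if attack_strength(candidate, curriculum, boundary) >= \
           attack_strength(attack, curriculum, boundary):
            attack = candidate
    return attack
```

The point of concentrating queries on boundary points is that far from the decision boundary a small edit almost never flips the classifier's flag, so comparisons made there waste queries; near the boundary, the single bit per query becomes informative enough to tell whether a proposed modification actually strengthened the attack.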