Off-The-Shelf Image-to-Image Models Are All You Need To Defeat Image Protection Schemes
2026-02-25 • Computer Vision and Pattern Recognition • Artificial Intelligence
AI summary
The authors show that common AI image models can remove the hidden protective changes added to images to prevent misuse such as deepfake creation. Rather than requiring purpose-built attacks, these ordinary tools can erase many kinds of protection using nothing more than a text prompt. Tested against several protection schemes, the approach removes protections more effectively than prior specialized attacks while keeping the images usable. This suggests that current image protections are weaker than commonly assumed, and that new defenses should be benchmarked against such AI-based attacks.
Generative AI • Protective perturbations • Image-to-image models • Deepfake • Style mimicry • Denoising • Adversarial attacks • Image protection • Benchmarking • Off-the-shelf models
Authors
Xavier Pleimling, Sifat Muhammad Abdullah, Gunjan Balde, Peng Gao, Mainack Mondal, Murtuza Jadliwala, Bimal Viswanath
Abstract
Advances in Generative AI (GenAI) have led to the development of various protection strategies to prevent the unauthorized use of images. These methods rely on adding imperceptible protective perturbations to images to thwart misuse such as style mimicry or deepfake manipulations. Although previous attacks on these protections required specialized, purpose-built methods, we demonstrate that this is no longer necessary. We show that off-the-shelf image-to-image GenAI models can be repurposed as generic "denoisers" using a simple text prompt, effectively removing a wide range of protective perturbations. Across 8 case studies spanning 6 diverse protection schemes, our general-purpose attack not only circumvents these defenses but also outperforms existing specialized attacks while preserving the image's utility for the adversary. Our findings reveal a critical and widespread vulnerability in the current landscape of image protection, indicating that many schemes provide a false sense of security. We stress the urgent need to develop robust defenses and establish that any future protection mechanism must be benchmarked against attacks from off-the-shelf GenAI models. Code is available in this repository: https://github.com/mlsecviswanath/img2imgdenoiser
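The core attack idea, regenerating the protected image with a generic image-to-image model so that the imperceptible perturbation is destroyed while the visible content survives, can be illustrated with a minimal toy analogue. Here a simple box blur stands in for the generative model; the image, perturbation budget, and function names are all illustrative assumptions, not the paper's actual pipeline (which uses off-the-shelf GenAI models driven by a text prompt):

```python
import numpy as np

def box_blur(img, k=3):
    """Toy 'image-to-image regeneration' stand-in: a k x k box blur."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
# Smooth stand-in "clean" image: a horizontal gradient in [0, 1].
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
# Imperceptible high-frequency protective perturbation (budget 0.03).
delta = 0.03 * rng.choice([-1.0, 1.0], size=clean.shape)
protected = np.clip(clean + delta, 0.0, 1.0)

# "Purify" the protected image by regenerating it.
purified = box_blur(protected)

# Residual perturbation before vs. after regeneration
# (compared against the clean image passed through the same operator).
err_before = np.abs(protected - clean).mean()
err_after = np.abs(purified - box_blur(clean)).mean()
print(err_before, err_after)  # the residual perturbation shrinks
```

Because the perturbation is high-frequency while the image content is smooth, a regeneration step that attenuates high frequencies removes most of the perturbation at little cost to the content. A real instance of the attack would swap the blur for an off-the-shelf image-to-image model with a generic prompt.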