XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
2026-04-10 • Cryptography and Security
Cryptography and Security • Artificial Intelligence • Distributed, Parallel, and Cluster Computing • Machine Learning
Federated Learning • Model Poisoning Attack • Non-collusive Attack • Adversarial Clients • Aggregation-agnostic • Server-side Defenses • Malicious Updates • Botnet • Security Threats • Machine Learning Security
Authors
Israt Jahan Mouri, Muhammad Ridowan, Muhammad Abdullah Adnan
Abstract
Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices; such control is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: Can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the non-collusive attack model, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the feasibility of this threat model, we propose XFED, the first aggregation-agnostic, non-collusive model poisoning attack. Our empirical evaluation across six benchmark datasets shows that XFED bypasses eight state-of-the-art defenses and outperforms six existing model poisoning attacks. These findings indicate that FL systems are substantially less secure than previously believed and underscore the urgent need for more robust and practical defense mechanisms.
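To make the non-collusive threat model concrete, the sketch below shows one simple way an isolated attacker could craft a malicious update using only its own local information. This is a minimal illustration, not the paper's actual XFED algorithm (which the abstract does not specify): here each compromised client performs an honest local SGD step and then reverses its direction, a basic sign-flip perturbation. The function names, the sign-flip strategy, and the `scale` parameter are all assumptions for illustration.

```python
# Hypothetical sketch of a non-collusive poisoned update.
# NOT the paper's XFED method; only the threat model is illustrated:
# each attacker uses solely its own gradient -- no inter-attacker
# communication, no access to other clients' updates, and no knowledge
# of the server-side aggregation rule.
import numpy as np

def benign_local_update(global_model: np.ndarray, local_grad: np.ndarray,
                        lr: float = 0.1) -> np.ndarray:
    """One honest SGD step on the client's own local data."""
    return global_model - lr * local_grad

def non_collusive_poisoned_update(global_model: np.ndarray,
                                  local_grad: np.ndarray,
                                  lr: float = 0.1,
                                  scale: float = 1.0) -> np.ndarray:
    """Independently crafted malicious update (illustrative sign-flip):
    the client computes its honest update direction, then submits a
    model moved the opposite way, using only local information."""
    honest = benign_local_update(global_model, local_grad, lr)
    delta = honest - global_model          # honest update direction
    return global_model - scale * delta    # push against that direction

# Each attacker runs this in isolation; the resulting poisoned updates
# differ across attackers because each client's local gradient differs.
rng = np.random.default_rng(0)
w = np.zeros(4)                            # toy global model
g1, g2 = rng.normal(size=4), rng.normal(size=4)
u1 = non_collusive_poisoned_update(w, g1)
u2 = non_collusive_poisoned_update(w, g2)
```

Because the perturbation depends only on each client's own data, the attackers share an objective (degrading the global model) without any coordination, which is the defining property of the non-collusive setting described above.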