Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning

2026-02-19

Machine Learning · Distributed, Parallel, and Cluster Computing
AI summary

The authors study federated learning, a way to train machine learning models across many clients without sharing their raw data. They focus on U-shaped federated split learning (UFSL), a variant that reduces the work done by clients but still risks leaking private data through the intermediate information shared with the server. To address this, they propose KD-UFSL, which combines microaggregation and differential privacy to better hide sensitive information in that shared data. Their results show that KD-UFSL makes it harder for attackers to reconstruct private data while keeping the main model accurate enough for real use.
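To make the leakage risk concrete, the following is a minimal, self-contained sketch of the kind of data-reconstruction attack the summary alludes to: an adversary who obtains some (input, smashed data) pairs fits a decoder and uses it to reconstruct unseen private inputs. The toy client layer, the least-squares decoder, and all dimensions below are illustrative assumptions, not the paper's actual attack or model architecture.

```python
import numpy as np

# Toy stand-in for a data-reconstruction attack on smashed data (illustrative only).
rng = np.random.default_rng(0)

n, d_in, d_smash = 500, 64, 16                 # samples, "image" size, smashed-data size
W_client = rng.normal(size=(d_in, d_smash))    # stands in for the client-side layers

X_private = rng.normal(size=(n, d_in))         # private client inputs
Z_smashed = np.tanh(X_private @ W_client)      # intermediate representations sent to the server

# An adversary with access to some (input, smashed) pairs fits a linear decoder
# by least squares, then reconstructs held-out inputs from their smashed data alone.
D, *_ = np.linalg.lstsq(Z_smashed[:400], X_private[:400], rcond=None)
X_rec = Z_smashed[400:] @ D

mse = np.mean((X_rec - X_private[400:]) ** 2)
print(f"reconstruction MSE on held-out samples: {mse:.4f}")
```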

Keywords
Federated Learning, Split Learning, Privacy, Differential Privacy, Microaggregation, Data Reconstruction Attack, U-shaped Federated Split Learning, Intermediate Representations, Machine Learning, Data Privacy
Authors
Obaidullah Zaland, Sajib Mistry, Monowar Bhuyan
Abstract
Big data scenarios, where massive, heterogeneous datasets are distributed across clients, demand scalable, privacy-preserving learning methods. Federated learning (FL) enables decentralized training of machine learning (ML) models across clients without data centralization. Decentralized training, however, places a computational burden on client devices. U-shaped federated split learning (UFSL) offloads a fraction of the client computation to the server while keeping both data and labels on the client side. However, the intermediate representations (i.e., smashed data) that clients share with the server can expose clients' private data. To reduce this exposure, this work proposes k-anonymous differentially private UFSL (KD-UFSL), which leverages privacy-enhancing techniques such as microaggregation and differential privacy to minimize data leakage from the smashed data transferred to the server. We first demonstrate that an adversary can recover private client data from intermediate representations via a data-reconstruction attack, and then present KD-UFSL as a privacy-enhancing solution that mitigates this risk. Our experiments on four benchmark datasets indicate that KD-UFSL increases the mean squared error between the original and reconstructed images by up to 50% in some cases and decreases their structural similarity by up to 40%. More importantly, KD-UFSL improves privacy while preserving the utility of the global model, highlighting its suitability for large-scale big data applications where privacy and utility must be balanced.
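As a rough illustration of the kind of protection a k-anonymity-plus-differential-privacy scheme can layer onto smashed data, the sketch below combines a simple sort-and-partition microaggregation step (groups of at least k records replaced by their group centroid) with per-record norm clipping and Gaussian noise. The group size k, clipping bound, noise scale, and this particular microaggregation variant are assumptions made for illustration, not the paper's exact KD-UFSL mechanism.

```python
import numpy as np

def microaggregate(z, k):
    """Replace each record with the centroid of a group of >= k similar records
    (sort-and-partition microaggregation along the leading principal direction)."""
    z = np.asarray(z, dtype=float)
    _, _, vt = np.linalg.svd(z - z.mean(axis=0), full_matrices=False)
    order = np.argsort(z @ vt[0])           # order records so neighbours are similar
    out = np.empty_like(z)
    for start in range(0, len(z), k):
        idx = order[start:start + k]
        if len(idx) < k:                     # fold a short tail into the previous group
            idx = order[start - k:]
        out[idx] = z[idx].mean(axis=0)
    return out

def dp_perturb(z, clip, sigma, rng):
    """Clip each record's L2 norm and add Gaussian noise (Gaussian-mechanism style)."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    z_clipped = z * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    return z_clipped + rng.normal(scale=sigma * clip, size=z.shape)

rng = np.random.default_rng(0)
smashed = rng.normal(size=(128, 16))         # placeholder intermediate representations
protected = dp_perturb(microaggregate(smashed, k=4), clip=1.0, sigma=0.5, rng=rng)
# `protected` is what a client would upload instead of the raw smashed data.
```

Under this kind of scheme, a client applies the transformation to its intermediate activations before uploading them, trading some fidelity of the smashed data for reduced reconstructability.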