Coherence-Aware Over-the-Air Distributed Learning under Heterogeneous Link Impairments

2026-03-09

Information Theory
AI summary

The authors address challenges in distributed machine learning over wireless networks caused by devices with different mobility and propagation environments, which affect communication quality. They propose a new method that smartly multiplexes signals to reduce wasted communication and improve accuracy, especially for devices with unstable connections. Their approach also reuses earlier updates to handle partial data reception. They prove their method works well even with imperfect channel information and show through experiments that it is more efficient and faster than traditional methods.

federated learning, channel state information, over-the-air aggregation, coherence time, orthogonal frequency-division multiplexing, pilot tones, communication efficiency, partial model reception, wireless networks, aggregation noise
Authors
Mehdi Karbalayghareh, David J. Love, Christopher G. Brinton
Abstract
Distributed machine learning (ML) over wireless networks hinges on accurate channel state information (CSI) and efficient exchange of high-dimensional model updates. These demands are governed by channel coherence time and bandwidth, which vary across devices (links) due to heterogeneous mobility and scattering, causing degraded downlink delivery and distorted uplink over-the-air (OTA) aggregation. We propose a coherence-aware federated learning (FL) framework that jointly addresses impairments on downlink and uplink with communication-efficient strategies. In the downlink, we employ product superposition to multiplex global model symbols for long-coherence (static) devices onto the pilot tones required by short-coherence (dynamic) devices for channel estimation, turning pilot overhead into payload while preserving estimation fidelity. In the proposed scheme, an orthogonal frequency-division multiplexing (OFDM) super-block is partitioned into sub-blocks aligned with the smallest coherence time and bandwidth, enabling consistent channel estimation and stabilizing OTA aggregation across heterogeneous devices. Partial model reception at dynamic devices is mitigated via previous local model filling (PLMF), which reuses prior updates. We establish convergence guarantees under heterogeneous link impairments, imperfect CSI, and aggregation noise. The proposed framework enables efficient scheduling under coherence heterogeneity; analysis and experiments demonstrate notable gains in communication efficiency, latency, and learning accuracy over conventional FL baselines.
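To make the PLMF idea concrete, the sketch below shows how a dynamic device that decodes only some chunks of the downlink global model could fill the missing coordinates with its previous local model. The function name, chunk layout, and data structures are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def plmf_fill(received_chunks, prev_model, chunk_size):
    """Previous local model filling (PLMF), as a sketch.

    received_chunks: dict mapping chunk index -> decoded model chunk
                     (the chunks the device successfully received).
    prev_model:      the device's model from the previous round,
                     used to fill coordinates that were not received.
    chunk_size:      number of model coordinates per chunk
                     (hypothetical partitioning, for illustration).
    """
    # Start from the previous local model, then overwrite the
    # coordinates covered by successfully decoded chunks.
    model = prev_model.copy()
    for idx, chunk in received_chunks.items():
        start = idx * chunk_size
        model[start:start + chunk_size] = chunk
    return model

# Example: an 8-dimensional model split into two chunks of 4;
# only chunk 0 is decoded, so chunk 1 is reused from the prior round.
prev = np.zeros(8)
received = {0: np.ones(4)}
filled = plmf_fill(received, prev, chunk_size=4)
```

Under this toy partitioning, the first four coordinates take the freshly received values while the last four retain the previous round's values, which is the behavior the abstract attributes to PLMF.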