FedEMA-Distill: Exponential Moving Average Guided Knowledge Distillation for Robust Federated Learning
arXiv:2603.04422v1
Abstract: Federated learning (FL) often degrades when clients hold heterogeneous, non-independent and identically distributed (non-IID) data and when some clients behave adversarially, leading to client drift, slow convergence, and high communication overhead. This paper proposes FedEMA-Distill, a server-side procedure that combines an exponential moving average (EMA) of the global model with ensemble knowledge distillation from client-uploaded prediction logits evaluated on a small public proxy dataset. Clients run standard local training, upload only compressed logits, and may use different model architectures, so no changes are required to client-side software while still supporting model heterogeneity across devices. Experiments on CIFAR-10, CIFAR-100, FEMNIST, and AG News under Dirichlet-0.1 label skew show that FedEMA-Distill improves top-1 accuracy by several percentage points (up to +5% on CIFAR-10 and +6% on CIFAR-100) over representative baselines, reaches a given target accuracy in 30-35% fewer communication rounds, and reduces per-round client uplink payloads to 0.09-0.46 MB, i.e., roughly an order of magnitude less than transmitting full model weights. Using coordinate-wise median or trimmed-mean aggregation of logits at the server further stabilizes training in the presence of up to 10-20% Byzantine clients and yields well-calibrated predictions under attack. These results indicate that coupling temporal smoothing with logits-only aggregation provides a communication-efficient and attack-resilient FL pipeline that is deployment-friendly and compatible with secure aggregation and differential privacy, since only aggregated or obfuscated model outputs are exchanged.
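Per the abstract, each round the server forms a teacher from client logits on the public proxy set, distills the global model toward that teacher, and smooths the result with an EMA. A minimal NumPy sketch of the teacher construction and the EMA step; the decay, temperature, proxy size, and class count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ema_update(ema_w, new_w, beta=0.99):
    """Exponential moving average of (flattened) global model weights."""
    return beta * ema_w + (1.0 - beta) * new_w

def softmax(z, t=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_targets(client_logits, temperature=2.0):
    """Ensemble teacher: average client logits on the proxy set,
    then soften with a distillation temperature. The server would
    train the global model to match these targets (e.g. via KL)."""
    teacher = client_logits.mean(axis=0)   # (proxy, classes)
    return softmax(teacher, t=temperature)

# Hypothetical round: 5 clients, 128 proxy examples, 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 128, 10))
targets = distill_targets(logits)
```

The sketch omits the distillation optimization itself; only the target construction and the weight-smoothing rule are shown.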
Executive Summary
This study proposes FedEMA-Distill, a server-side procedure that combines an exponential moving average (EMA) of the global model with ensemble knowledge distillation from client logits for robust federated learning. FedEMA-Distill addresses the twin challenges of heterogeneous non-IID data and adversarial clients through temporal smoothing and logits-only aggregation. Across CIFAR-10, CIFAR-100, FEMNIST, and AG News, it improves top-1 accuracy by up to 5-6 percentage points, reaches target accuracy in 30-35% fewer communication rounds, and remains stable with 10-20% Byzantine clients. Because clients exchange only aggregated or obfuscated model outputs, the approach is compatible with secure aggregation and differential privacy and requires no client-side software changes, making it a practical candidate for real-world federated deployments and a useful step toward more resilient and scalable FL pipelines.
Key Points
- ▸ FedEMA-Distill combines EMA with ensemble knowledge distillation for robust federated learning
- ▸ The approach addresses challenges of non-IID data and adversarial clients
- ▸ Experiments show up to +5-6% top-1 accuracy, 30-35% fewer communication rounds to target accuracy, 0.09-0.46 MB per-round uplink payloads, and resilience to 10-20% Byzantine clients
Merits
Improved Model Robustness
FedEMA-Distill provides a robust framework for model aggregation, capable of handling non-IID data and adversarial clients.
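The robustness claim rests on coordinate-wise median or trimmed-mean aggregation of the uploaded logits, which bounds the influence of any single client. A small NumPy sketch; the client counts, trim fraction, and the crude "huge logits" attack are illustrative assumptions:

```python
import numpy as np

def median_aggregate(client_logits):
    """Coordinate-wise median across clients; shape (clients, proxy, classes)."""
    return np.median(client_logits, axis=0)

def trimmed_mean_aggregate(client_logits, trim_frac=0.2):
    """Per coordinate, drop the largest and smallest trim_frac of client
    values, then average the remainder."""
    n = client_logits.shape[0]
    k = int(n * trim_frac)
    srt = np.sort(client_logits, axis=0)
    kept = srt[k:n - k] if k > 0 else srt
    return kept.mean(axis=0)

# 10 clients, 2 of them Byzantine (uploading huge logits); 4 proxy
# examples, 3 classes.
rng = np.random.default_rng(1)
honest = rng.normal(size=(8, 4, 3))
byzantine = 100.0 * np.ones((2, 4, 3))
logits = np.concatenate([honest, byzantine], axis=0)

robust = trimmed_mean_aggregate(logits, trim_frac=0.2)
naive = logits.mean(axis=0)
```

With 20% trimming, the two outlier values per coordinate are discarded, so `robust` stays at the honest clients' scale while the plain mean is pulled toward the attack.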
Enhanced Communication Efficiency
The approach reduces per-round client uplink payloads to 0.09-0.46 MB, roughly an order of magnitude less than transmitting full model weights.
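The 0.09-0.46 MB range is consistent with simple logit-matrix sizing: a client's upload is just (proxy examples × classes) values. A back-of-the-envelope sketch; the float16 precision and proxy-set sizes are assumptions chosen to illustrate the quoted endpoints, not the paper's exact settings:

```python
def logit_payload_mb(proxy_examples, num_classes, bytes_per_value=2):
    """Uplink size of one client's logits matrix, in MB
    (float16 = 2 bytes per value)."""
    return proxy_examples * num_classes * bytes_per_value / 1e6

# Illustrative proxy-set sizes (assumed, not from the paper):
low = logit_payload_mb(4500, 10)     # 10-way task
high = logit_payload_mb(2300, 100)   # 100-way task

# For comparison, a full ResNet-18 (~11.7M float32 parameters) is ~46.8 MB,
# far larger than either logit payload.
resnet18_mb = 11.7e6 * 4 / 1e6
```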
Increased Deployment Friendliness
Because only aggregated or obfuscated model outputs are exchanged, FedEMA-Distill composes naturally with secure aggregation and differential privacy and requires no changes to client-side software.
Demerits
Potential Overreliance on EMA
Because the global model is smoothed by an EMA, it may adapt slowly to genuine shifts in client data distributions, and the decay rate adds a hyperparameter to tune, which may limit the approach's applicability in non-stationary federated scenarios.
Limited Exploration of Alternative Aggregation Methods
The study evaluates only EMA smoothing with coordinate-wise median or trimmed-mean logit aggregation, leaving alternative robust aggregation rules largely unexplored.
Expert Commentary
FedEMA-Distill is a useful advance in federated learning, addressing the need for robust and communication-efficient model aggregation. By coupling EMA-based temporal smoothing with knowledge distillation over client logits, it offers a coherent answer to the challenges posed by non-IID data and adversarial clients. The reliance on EMA and the limited exploration of alternative aggregation rules are real limitations, but the reported gains in accuracy, round efficiency, and Byzantine resilience suggest genuine potential for real-world applications. As the field evolves, the work stands as a solid contribution toward more resilient and scalable FL pipelines.
Recommendations
- ✓ Future studies should explore the adaptability of FedEMA-Distill to diverse federated learning scenarios and investigate alternative aggregation methods.
- ✓ The approach should be further evaluated in real-world applications to demonstrate its practical effectiveness and scalability.