ARMOR: Adaptive Resilience Against Model Poisoning Attacks in Continual Federated Learning for Mobile Indoor Localization
arXiv:2603.19594v1 Announce Type: new Abstract: Indoor localization has become increasingly essential for applications ranging from asset tracking to delivering personalized services. Federated learning (FL) offers a privacy-preserving approach by training a centralized global model (GM) using distributed data from mobile devices without sharing raw data. However, real-world deployments require a continual federated learning (CFL) setting, where the GM receives continual updates under device heterogeneity and evolving indoor environments. In such dynamic conditions, erroneous or biased updates can cause the GM to deviate from its expected learning trajectory, gradually degrading internal GM representations and GM localization performance. This vulnerability is further exacerbated by adversarial model poisoning attacks. To address this challenge, we propose ARMOR, a novel CFL-based framework that monitors and safeguards the GM during continual updates. ARMOR introduces a novel state-space model (SSM) that learns the historical evolution of GM weight tensors and predicts the expected next state of the GM's weight tensors. By comparing incoming local updates with this SSM projection, ARMOR detects deviations and selectively mitigates corrupted updates before they are aggregated into the GM. This mechanism enables robust adaptation to temporal environmental dynamics and mitigates the effects of model poisoning attacks while preventing GM corruption. Experimental evaluations in real-world conditions, using real-world data and mobile devices, indicate that ARMOR achieves notable improvements, with up to 8.0x reduction in mean error and 4.97x reduction in worst-case error compared to state-of-the-art indoor localization frameworks, demonstrating strong resilience against model corruption.
Executive Summary
This article proposes ARMOR, a novel framework for adaptive resilience against model poisoning attacks in continual federated learning for mobile indoor localization. ARMOR introduces a state-space model that learns the historical evolution of global model weight tensors and predicts their expected next state. By comparing incoming local updates with this projection, ARMOR detects deviations and selectively mitigates corrupted updates before aggregation. Experimental evaluations report up to 8.0x reduction in mean error and 4.97x reduction in worst-case error compared to state-of-the-art indoor localization frameworks. The framework's ability to adapt to temporal environmental dynamics and mitigate model poisoning attacks makes it a strong contender for real-world applications. However, further research is needed to assess its scalability and robustness in large-scale deployments.
Key Points
- ▸ ARMOR proposes a novel framework for adaptive resilience against model poisoning attacks in CFL
- ▸ ARMOR uses a state-space model to learn and predict the expected next state of global model weight tensors
- ▸ The framework selectively mitigates corrupted updates to prevent GM corruption and improve localization performance
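The screening idea in the points above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm or API: it assumes a simple linear state-space model `x_{t+1} = A x_t` over a flattened weight vector, a Euclidean-distance deviation test, and FedAvg-style aggregation; the function names and threshold are illustrative.

```python
import numpy as np

def predict_next_state(A, x_t):
    """Project the expected next global-model weight vector under the assumed
    linear dynamics x_{t+1} = A x_t (a stand-in for the paper's learned SSM)."""
    return A @ x_t

def screen_updates(predicted, local_updates, threshold):
    """Accept only local updates whose distance to the SSM projection is small;
    large deviations are treated as corrupted or poisoned and dropped."""
    return [u for u in local_updates
            if np.linalg.norm(u - predicted) <= threshold]

def aggregate(global_weights, accepted):
    """FedAvg-style mean over the screened updates."""
    if not accepted:
        return global_weights  # no trusted updates this round; keep current GM
    return np.mean(accepted, axis=0)

# Toy round with a 4-parameter "model".
rng = np.random.default_rng(0)
A = np.eye(4)                 # identity dynamics: expect weights to persist
x_t = np.ones(4)
pred = predict_next_state(A, x_t)
benign = [x_t + 0.01 * rng.standard_normal(4) for _ in range(3)]
poisoned = [x_t + 5.0 * np.ones(4)]   # large malicious drift
accepted = screen_updates(pred, benign + poisoned, threshold=0.5)
new_gm = aggregate(x_t, accepted)
print(len(accepted))  # → 3 (the poisoned update is rejected)
```

The design choice worth noting is that rejection is relative to the predicted trajectory rather than to the current GM, which is what lets the filter tolerate legitimate environmental drift while still flagging abrupt adversarial jumps.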
Merits
Improved Robustness
ARMOR demonstrates up to 8.0x lower mean error and 4.97x lower worst-case error compared to state-of-the-art indoor localization frameworks
Adaptability
The framework's ability to adapt to temporal environmental dynamics makes it suitable for real-world applications
Selective Mitigation
ARMOR's design selectively mitigates corrupted updates before aggregation, which prevents GM corruption and improves localization performance
Demerits
Scalability Limitations
Further research is needed to assess ARMOR's scalability and robustness in large-scale deployments
Dependence on Historical Data
ARMOR's performance may degrade if the historical data used to train the state-space model is outdated or incomplete
Computational Complexity
The framework's computational overhead may be high, since the state-space model must be continually updated and queried to predict the next state of the GM weight tensors
Expert Commentary
The article proposes a novel framework for adaptive resilience against model poisoning attacks in CFL. While the framework demonstrates notable improvements in mean and worst-case error reduction, further research is needed to assess its scalability and robustness in large-scale deployments. The use of a state-space model to learn and predict the expected next state of global model weight tensors is an innovative approach that can be applied to other areas of machine learning. However, the framework's dependence on historical data and computational complexity are potential limitations that need to be addressed.
Recommendations
- ✓ Further research is needed to assess ARMOR's scalability and robustness in large-scale deployments
- ✓ The framework's design should be modified to reduce its dependence on historical data and to lower its computational overhead
Sources
Original: arXiv - cs.LG