Structure-Aware Distributed Backdoor Attacks in Federated Learning
arXiv:2603.03865v1 Announce Type: new

Abstract: While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing studies on backdoor attacks in federated learning mainly focus on trigger design or poisoning strategies, typically assuming that identical perturbations behave similarly across different model architectures. This assumption overlooks the impact of model structure on perturbation effectiveness. From a structure-aware perspective, this paper analyzes the coupling relationship between model architectures and backdoor perturbations. We introduce two metrics, Structural Responsiveness Score (SRS) and Structural Compatibility Coefficient (SCC), to measure a model's sensitivity to perturbations and its preference for fractal perturbations. Based on these metrics, we develop a structure-aware fractal perturbation injection framework (TFI) to study the role of architectural properties in the backdoor injection process. Experimental results show that model architecture significantly influences the propagation and aggregation of perturbations. Networks with multi-path feature fusion can amplify and retain fractal perturbations even under low poisoning ratios, while models with low structural compatibility constrain their effectiveness. Further analysis reveals a strong correlation between SCC and attack success rate, suggesting that SCC can predict perturbation survivability. These findings highlight that backdoor behaviors in federated learning depend not only on perturbation design or poisoning intensity but also on the interaction between model architecture and aggregation mechanisms, offering new insights for structure-aware defense design.
Executive Summary
The article examines how model architecture shapes backdoor attacks in federated learning. It introduces two metrics, the Structural Responsiveness Score (SRS) and the Structural Compatibility Coefficient (SCC), to measure a model's sensitivity to perturbations and its compatibility with fractal triggers. The study shows that model architecture strongly influences how perturbations propagate and survive aggregation: networks with multi-path feature fusion amplify and retain fractal perturbations even at low poisoning ratios, while structurally incompatible models suppress them. These findings underscore the importance of accounting for model architecture in backdoor defense design, offering new insights for structure-aware defense mechanisms.
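The paper does not reproduce its trigger construction here, but the core idea of a fractal perturbation can be illustrated with a minimal sketch: generate a self-similar binary pattern (a Sierpinski-carpet-style mask, chosen here only as a stand-in fractal) and blend it into an image at low amplitude. The function names and the `alpha` blending parameter are hypothetical illustrations, not the paper's TFI framework.

```python
import numpy as np

def sierpinski_mask(size: int, depth: int = 5) -> np.ndarray:
    """Binary Sierpinski-carpet-style mask as a stand-in fractal pattern.

    A pixel is removed (0) if any base-3 digit pair of its (x, y)
    coordinates is (1, 1), i.e. it falls in the center of some 3x3 block.
    """
    y, x = np.indices((size, size))
    keep = np.ones((size, size), dtype=bool)
    for _ in range(depth):
        keep &= ~((x % 3 == 1) & (y % 3 == 1))
        x //= 3
        y //= 3
    return keep.astype(np.float32)

def inject_trigger(image: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend a low-amplitude fractal pattern into an image in [0, 1]."""
    mask = sierpinski_mask(image.shape[0])
    return np.clip(image + alpha * mask[..., None], 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((27, 27, 3)).astype(np.float32)   # 27 = 3^3 for a clean carpet
poisoned = inject_trigger(img)
```

Low `alpha` keeps the trigger visually stealthy while the self-similar structure remains detectable by feature extractors at multiple scales.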
Key Points
- ▸ Model architecture significantly influences backdoor attack effectiveness in federated learning
- ▸ Structural Responsiveness Score (SRS) and Structural Compatibility Coefficient (SCC) metrics measure model sensitivity to perturbations
- ▸ Networks with multi-path feature fusion can amplify fractal perturbations even under low poisoning ratios
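The survival of a backdoor under aggregation, as described in the key points above, can be sketched with standard federated averaging (FedAvg): one malicious client scales its update so that, after being averaged with benign updates, a small backdoor component accumulates across rounds while benign noise averages out. The client counts, scaling factor, and dimensions below are arbitrary illustrative values, not the paper's experimental setup.

```python
import numpy as np

def fedavg(updates):
    """Unweighted federated averaging of client updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
dim, n_rounds, n_benign = 8, 10, 9
global_w = np.zeros(dim)
backdoor_dir = np.zeros(dim)
backdoor_dir[0] = 1.0          # direction the attacker wants to embed

for _ in range(n_rounds):
    benign = [rng.normal(scale=0.01, size=dim) for _ in range(n_benign)]
    # One malicious client (10% poisoning ratio) scales its update by the
    # total client count so a step of ~0.1 survives the 1/10 averaging.
    malicious = (n_benign + 1) * 0.1 * backdoor_dir
    global_w += fedavg(benign + [malicious])

# After aggregation, the backdoor component has accumulated round over
# round, while benign directions remain near zero.
```

This is the baseline dynamic the paper refines: architectures with multi-path feature fusion effectively increase how much of such an injected component is retained per round, which is why they remain vulnerable even at low poisoning ratios.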
Merits
Novel Metrics
The introduction of SRS and SCC metrics provides a new framework for evaluating model sensitivity to perturbations, enabling more effective defense mechanisms.
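The abstract reports a strong correlation between SCC and attack success rate (ASR). The paper does not give SCC's formula here, but the claimed predictive relationship amounts to a simple statistical check one could run per architecture; the numbers below are invented placeholders purely to show the computation, not the paper's measurements.

```python
import numpy as np

# Hypothetical per-architecture values (NOT from the paper), one pair per
# model: its Structural Compatibility Coefficient and the observed ASR.
scc = np.array([0.21, 0.35, 0.48, 0.62, 0.80])
asr = np.array([0.18, 0.31, 0.52, 0.66, 0.85])

# Pearson correlation between SCC and attack success rate.
r = np.corrcoef(scc, asr)[0, 1]
```

A correlation near 1.0 over many architectures would support using SCC as a cheap pre-deployment screen for perturbation survivability, which is the defense-side use the paper suggests.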
Comprehensive Analysis
The study offers a thorough analysis of the relationship between model architecture and backdoor attacks, shedding light on the importance of structural properties in federated learning.
Demerits
Limited Generalizability
The study's findings may not generalize to all types of model architectures or federated learning scenarios, potentially limiting the applicability of the proposed metrics and framework.
Expert Commentary
The article makes a significant contribution to the field of federated learning by highlighting the critical role of model architecture in backdoor attacks. The proposed SRS and SCC metrics offer a valuable framework for evaluating model sensitivity to perturbations, enabling more effective defense mechanisms. However, further research is needed to generalize the findings to a broader range of model architectures and federated learning scenarios. The reported correlation between SCC and attack success rate is particularly notable: if it holds broadly, defenders could assess an architecture's perturbation survivability before deployment rather than relying solely on detecting poisoned updates at aggregation time.
Recommendations
- ✓ Further research into the application of SRS and SCC metrics to various model architectures and federated learning scenarios
- ✓ The development of structure-aware defense mechanisms that incorporate the proposed metrics and framework
- ✓ The establishment of guidelines and regulations for the development and deployment of federated learning systems, with a focus on security and transparency