Dual-Masking Framework against Two-Sided Model Attacks in Federated Learning
Journal
Proceedings - IEEE Global Communications Conference, GLOBECOM
ISBN
978-1-7281-8104-2
Date Issued
2021-01-01
Author(s)
Abstract
With the popularity of AIoT (Artificial Intelligence of Things) services, we can foresee that smart end devices will generate tremendous amounts of user data at the edge. In particular, it is critical to address how to properly distill knowledge from the edge network in a communication-efficient and privacy-preserving manner. Federated learning (FL), one of the most promising machine learning frameworks, ensures data privacy by allowing end devices to collaboratively train a shared model without exposing raw data to an aggregation server. However, due to its distributed nature, the framework is vulnerable to two major threats: Model Inversion Attacks and Model Poisoning Attacks, which a compromised aggregator or malicious end devices may launch during the training phase. The former leaks sensitive information by inverting model weights back to users' raw data, while the latter breaks model security and misleads the global model into wrong inference results. Unfortunately, existing research has not tackled such two-sided model attacks occurring concurrently in FL. Therefore, in this paper, we propose a dual-masking federated learning (DMFL) framework that advocates partial weight uploading in the aggregation process and applies two kinds of masks, one on the end-device side and one on the aggregator side. Experimental results on benchmark image-classification data show that the proposed DMFL framework outperforms other baselines, confirming that it can successfully preserve weight privacy and protect model security for AIoT.
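Based only on the abstract above, the following minimal Python sketch illustrates the dual-masking idea at a high level: a device-side mask that realizes partial weight uploading, and an aggregator-side mask applied before averaging as a crude poisoning defence. The keep ratio, random-mask construction, and clipping-based aggregation rule are illustrative assumptions, not the authors' actual DMFL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def device_mask(weights, keep_ratio=0.5):
    # Device-side mask: upload only a random fraction of the weights
    # (partial weight uploading); masked entries are withheld (zeroed here).
    # keep_ratio is an assumed hyperparameter, not from the paper.
    mask = rng.random(weights.shape) < keep_ratio
    return np.where(mask, weights, 0.0)

def aggregator_mask(uploads, clip=1.0):
    # Aggregator-side mask: clip large-magnitude entries before averaging,
    # a simple stand-in for a poisoning-robust aggregation rule.
    clipped = [np.clip(u, -clip, clip) for u in uploads]
    return np.mean(clipped, axis=0)

# Toy round: three end devices, a four-weight "model".
local_weights = [rng.normal(size=4) for _ in range(3)]
uploads = [device_mask(w) for w in local_weights]
global_update = aggregator_mask(uploads)
print(global_update)
```

The two masks target the two threats separately: withholding part of each update limits what a model inversion attack can reconstruct on the server side, while the aggregator-side masking bounds the influence any single malicious update can exert on the global model.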
Subjects
Dual-Masking Framework | Federated Learning | Model Inversion Attacks | Model Poisoning Attacks | Model Security | Weights Privacy
Type
conference paper
