Balancing Privacy and Security: Navigating the Future of Federated Learning and AI
By Armin Shokri Kalisa and Robbert Schravendijk
Introduction
Apple, Microsoft, and Google are ushering in an era of artificially intelligent (AI) smartphones and computers designed to automate tasks such as editing photos and sending birthday greetings (B.X. Chen, 2024). To enable these features, however, the companies need access to more of our data: Windows computers will frequently take screenshots of user activity, iPhones will compile information from across apps, and Android phones will listen to calls in real time to detect scams. This raises the question: are you willing to share this level of personal information?
The ongoing AI boom is bringing such models into more and more applications, which in turn raises privacy concerns about the vast amounts of data required to train them. One proposed solution is to decentralize learning: each device trains a model locally on its own data without ever sharing that data, and the resulting local models are then aggregated into a new global model. This privacy-friendly framework is called Federated Learning (FL) (B. McMahan et al., 2017). While FL makes it possible to train AI models in a more privacy-friendly manner, it does not guarantee security against attacks. Based on the work of A. Shokri Kalisa, this article covers how attackers can use backdoor attacks to poison the global model produced by FL and what steps can be taken to make FL more robust against these attacks.
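To make the aggregation step concrete, the sketch below shows one simple way the server could combine local models: a weighted average of the clients' weights, in the spirit of the FedAvg algorithm of McMahan et al. (2017). The function name and data layout are illustrative assumptions, not taken from any particular FL library.

```python
import numpy as np

def federated_average(local_models, num_examples):
    """Aggregate local model weights into a new global model (FedAvg-style sketch).

    local_models : list of weight lists (one list of np.ndarray per client)
    num_examples : number of training examples held by each client
    """
    total = sum(num_examples)
    global_weights = []
    # Average each weight tensor across clients, weighted by local dataset size.
    for layer_idx in range(len(local_models[0])):
        layer = sum(
            (n / total) * client[layer_idx]
            for client, n in zip(local_models, num_examples)
        )
        global_weights.append(layer)
    return global_weights

# Toy example: two clients, each holding a single 2x2 weight matrix.
client_a = [np.array([[1.0, 2.0], [3.0, 4.0]])]
client_b = [np.array([[5.0, 6.0], [7.0, 8.0]])]
global_model = federated_average([client_a, client_b], num_examples=[100, 300])
print(global_model[0])  # pulled toward client_b, which holds more data
```

Because the server only ever sees model weights, the raw data never leaves the device; as the rest of the article discusses, this is also exactly the property an attacker can exploit by submitting poisoned weights.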
[....]