Privacy-preserving and Trusted Machine Learning:
Edge AI and Federated Learning for collectively training models
State of the art: Edge AI (or “edge intelligence”), the intersection of edge computing and AI, has attracted significant interest in recent years, leading to the creation of foundations such as tinyML. This interest is driven by several factors: advances in hardware, especially mobile and IoT devices, that enable applications based on deep learning to run on edge devices; advances in AI that allow large models to be distilled into smaller, parameter-efficient neural networks without significant loss of accuracy, extending applicability to domains with limited computational resources such as edge devices; and efficiency, in the form of low latency and reduced bandwidth requirements. At the same time, the main motivation behind edge AI in several application domains is privacy preservation and security: collected data is stored where the actual analysis happens and does not leave the device, which is often a key enabler of trustworthiness. A further advantage of edge AI is that it is a natural enabler of federated learning and swarm intelligence, whether through reinforcement learning or online/continuous learning.
Challenge: Edge AI presents important benefits and opportunities that TITAN aims to capitalise on. At the same time, most of the tools TITAN will integrate into its ecosystem rely on traditional machine learning approaches and will require transformation to support edge AI. In addition, TITAN addresses citizens directly, which makes privacy preservation and security major factors; TITAN also wants to exploit implicit and explicit user feedback to improve its solution. Federated (decentralised) learning, although challenging, provides an opportunity to train models collectively without the data ever leaving the edge device.
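The core idea of federated learning described above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch of federated averaging (FedAvg) with a central aggregator: each client trains on its own data locally and only model weights, never raw data, are sent back and averaged. All names (local_update, federated_round) and the simple linear-regression clients are assumptions for the sake of the example, not part of TITAN's design.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: linear regression via gradient descent.
    Runs entirely on the edge device; raw (X, y) never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server side: collect locally trained weights and average them,
    weighted by each client's dataset size (as in FedAvg)."""
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)

# Synthetic demo: three clients, each holding private samples of the
# same underlying linear relation y = X @ [2, -1] + noise.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
# After 20 rounds, w is close to the true weights [2, -1].
```

In a real deployment the averaging step would run on a parameter server and clients would communicate over the network; asynchronous variants relax the requirement that every client reports before each averaging step.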
Going beyond: TITAN will explore several technologies for transforming a selected set of tools into edge AI tools. Parameter-efficient neural network architectures such as MobileNets and SqueezeNet, pruning and truncation, and knowledge distillation (training smaller networks using larger networks as “teachers”) are all viable approaches for model transformation, alongside the facilities provided by TensorFlow Lite for converting a TensorFlow model for on-device inference. Regarding federated learning, TITAN will study approaches involving parameter servers and, to a lesser degree, asynchronous SGD.
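To make one of the compression techniques above concrete, here is a minimal sketch of magnitude-based weight pruning, the simplest pruning criterion: the fraction of weights with the smallest absolute values is set to zero, producing a sparse model suited to resource-limited edge devices. The function name and the toy matrix are illustrative assumptions; framework-level tooling (e.g. TensorFlow's pruning utilities) would be used in practice.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy weight matrix: the two smallest-magnitude entries get zeroed.
W = np.array([[0.01, -0.9],
              [0.5, -0.02]])
W_pruned = prune_by_magnitude(W, sparsity=0.5)
# W_pruned == [[0.0, -0.9], [0.5, 0.0]]
```

Distillation complements pruning: rather than removing weights from a large network, a small “student” network is trained to match the large “teacher” network's output distribution.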