Helgi I. Ingolfsson, Chris Neale, et al.
PNAS
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge. Edge computing systems are often characterized by heterogeneous hardware, unreliable client devices, and energy constraints. In this paper, we propose FLEdge, which complements existing FL benchmarks with a systematic evaluation of client capabilities. We focus on computational and communication bottlenecks, client behavior, and data security implications. Our experiments, with models ranging from 14K to 80M trainable parameters, are carried out on dedicated hardware with emulated network characteristics and client behavior. We find that state-of-the-art embedded hardware suffers from significant memory bottlenecks, leading to longer processing times than on modern data center GPUs.
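As a rough illustration of the kind of client-side measurement the abstract describes (not code from FLEdge itself), the sketch below times the local training step of a generic, synchronous FedAvg round across several emulated clients; the toy least-squares objective, client count, and dataset sizes are assumptions made purely for this example.

# Illustrative sketch only: one FedAvg round with per-client wall-clock timing,
# in the spirit of measuring client-side computational bottlenecks.
# All names, sizes, and the toy objective are assumptions, not FLEdge internals.
import time
import numpy as np

def local_training(weights, data, lr=0.01, epochs=5):
    """Toy local update: gradient steps on a least-squares objective."""
    w = weights.copy()
    X, y = data[:, :-1], data[:, -1]
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """One synchronous FedAvg round; returns new weights and per-client times."""
    updates, timings = [], []
    for data in client_datasets:
        start = time.perf_counter()
        updates.append(local_training(global_w, data))
        timings.append(time.perf_counter() - start)  # client processing time
    # Weight each client's update by its number of local samples.
    sizes = np.array([len(d) for d in client_datasets], dtype=float)
    new_w = np.average(updates, axis=0, weights=sizes)
    return new_w, timings

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, clients = 16, 8
    # Unequal dataset sizes emulate heterogeneous clients.
    datasets = [rng.normal(size=(rng.integers(100, 1000), dim + 1))
                for _ in range(clients)]
    w = np.zeros(dim)
    w, times = fedavg_round(w, datasets)
    print("per-client training time (s):", [round(t, 4) for t in times])

In a real benchmark the per-client timings would come from heterogeneous embedded devices rather than a single process, but the aggregation and timing structure stay the same.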
Divyansh Jhunjhunwala, Shiqiang Wang, et al.
ICLR 2023
Christopher Giblin, Sean Rooney, et al.
BigData Congress 2021
Romeo Kienzler, Johannes Schmude, et al.
Big Data 2023