FedFly: towards migration in edge-based distributed federated learning
Item metadata
dc.contributor.author | Ullah, Rehmat | |
dc.contributor.author | Wu, Di | |
dc.contributor.author | Harvey, Paul | |
dc.contributor.author | Kilpatrick, Peter | |
dc.contributor.author | Spence, Ivor | |
dc.contributor.author | Varghese, Blesson | |
dc.date.accessioned | 2022-07-20T12:30:01Z | |
dc.date.available | 2022-07-20T12:30:01Z | |
dc.date.issued | 2022-11 | |
dc.identifier | 280525610 | |
dc.identifier | ee0bd79e-48f8-4ef2-82cb-089a34895d79 | |
dc.identifier | 85135760268 | |
dc.identifier.citation | Ullah, R., Wu, D., Harvey, P., Kilpatrick, P., Spence, I. & Varghese, B. 2022, 'FedFly: towards migration in edge-based distributed federated learning', IEEE Communications Magazine, vol. 60, no. 10, pp. 42-48. https://doi.org/10.1109/mcom.003.2100964 | en |
dc.identifier.issn | 0163-6804 | |
dc.identifier.uri | https://hdl.handle.net/10023/25671 | |
dc.description.abstract | Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping all of the original data generated on devices local. Since devices may be resource constrained, offloading can improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to the original edge server must be migrated. We present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, show that compared to the state-of-the-art offloading approach in FL, FedFly reduces training time by up to 33% when a device moves after 50% of training is complete, and by up to 45% when it moves after 90% of training is complete. FedFly incurs a negligible overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation. (A minimal illustrative sketch of the migration step follows the metadata table below.) | |
dc.format.extent | 7 | |
dc.format.extent | 1061059 | |
dc.language.iso | eng | |
dc.relation.ispartof | IEEE Communications Magazine | en |
dc.subject | Federated learning | en |
dc.subject | Edge computing | en |
dc.subject | Deep neural networks | en |
dc.subject | Distributed machine learning | en |
dc.subject | Internet-of-Things | en |
dc.subject | QA75 Electronic computers. Computer science | en |
dc.subject | QA76 Computer software | en |
dc.subject | DAS | en |
dc.subject.lcc | QA75 | en |
dc.subject.lcc | QA76 | en |
dc.title | FedFly: towards migration in edge-based distributed federated learning | en |
dc.type | Journal article | en |
dc.contributor.institution | University of St Andrews. School of Computer Science | en |
dc.identifier.doi | https://doi.org/10.1109/mcom.003.2100964 | |
dc.description.status | Peer reviewed | en |
dc.identifier.url | https://github.com/qub-blesson/FedFly | en |
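The abstract describes partitioning a DNN so that a device executes the early layers while an edge server trains the remainder, and migrating that server-side state when the device moves to a different edge server. Below is a minimal sketch of such a migration step in PyTorch; the names `EdgeServer`, `snapshot`, `restore`, and `migrate` are illustrative assumptions, not FedFly's actual API (see the linked GitHub repository for the real implementation).

```python
# Illustrative sketch only: in offloading-based FL, a DNN is split so the
# device runs the first layers and an edge server trains the rest. When a
# device moves, the server-side partial model and optimizer state must be
# transferred so training can resume on the new edge server.
# All class and function names here are hypothetical, not FedFly's API.

import io

import torch
import torch.nn as nn


class EdgeServer:
    """Holds the server-side portion of a split DNN for one device."""

    def __init__(self, server_layers: nn.Module):
        self.model = server_layers
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.01)

    def snapshot(self) -> bytes:
        """Serialize model and optimizer state for migration."""
        buf = io.BytesIO()
        torch.save(
            {
                "model": self.model.state_dict(),
                "optimizer": self.optimizer.state_dict(),
            },
            buf,
        )
        return buf.getvalue()

    def restore(self, blob: bytes) -> None:
        """Resume training from a snapshot received from another server."""
        state = torch.load(io.BytesIO(blob))
        self.model.load_state_dict(state["model"])
        self.optimizer.load_state_dict(state["optimizer"])


def migrate(source: EdgeServer, destination: EdgeServer) -> None:
    """Move a device's server-side training state between edge servers.

    In a real deployment the snapshot would travel over the network;
    here it is passed in memory for illustration.
    """
    destination.restore(source.snapshot())


if __name__ == "__main__":
    def server_half() -> nn.Module:
        # Toy server-side half of a split model.
        return nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 10))

    old_server = EdgeServer(server_half())
    new_server = EdgeServer(server_half())
    migrate(old_server, new_server)  # device moved; new_server resumes training
```

Serializing both the model and the optimizer state is what allows training to continue on the destination server rather than restart, which is consistent with the low migration overhead reported in the abstract.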