

Item metadata

dc.contributor.author: Ullah, Rehmat
dc.contributor.author: Wu, Di
dc.contributor.author: Harvey, Paul
dc.contributor.author: Kilpatrick, Peter
dc.contributor.author: Spence, Ivor
dc.contributor.author: Varghese, Blesson
dc.date.accessioned: 2022-07-20T12:30:01Z
dc.date.available: 2022-07-20T12:30:01Z
dc.date.issued: 2022-11
dc.identifier: 280525610
dc.identifier: ee0bd79e-48f8-4ef2-82cb-089a34895d79
dc.identifier: 85135760268
dc.identifier.citation: Ullah, R, Wu, D, Harvey, P, Kilpatrick, P, Spence, I & Varghese, B 2022, 'FedFly: towards migration in edge-based distributed federated learning', IEEE Communications Magazine, vol. 60, no. 10, pp. 42-48. https://doi.org/10.1109/mcom.003.2100964
dc.identifier.issn: 0163-6804
dc.identifier.uri: https://hdl.handle.net/10023/25671
dc.description.abstract: Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping all the original data generated on devices local. Since devices may be resource constrained, offloading can be used to improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to the edge server need to be migrated. In line with this assertion, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, support our claim that FedFly can reduce training time by up to 33% when a device moves after 50% of the training is completed, and by up to 45% when 90% of the training is completed, compared to a state-of-the-art offloading approach in FL. FedFly has a negligible overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation.
dc.format.extent: 7
dc.format.extent: 1061059
dc.language.iso: eng
dc.relation.ispartof: IEEE Communications Magazine
dc.subject: Federated learning
dc.subject: Edge computing
dc.subject: Deep neural networks
dc.subject: Distributed machine learning
dc.subject: Internet-of-Things
dc.subject: QA75 Electronic computers. Computer science
dc.subject: QA76 Computer software
dc.subject: DAS
dc.subject.lcc: QA75
dc.subject.lcc: QA76
dc.title: FedFly: towards migration in edge-based distributed federated learning
dc.type: Journal article
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.identifier.doi: https://doi.org/10.1109/mcom.003.2100964
dc.description.status: Peer reviewed
dc.identifier.url: https://github.com/qub-blesson/FedFly
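The abstract in this record describes offloading part of a DNN's training from a device to an edge server, and migrating that offloaded training state to a different edge server when the device moves mid-training. The following is a minimal illustrative sketch of that migration idea only, not the authors' implementation (see the linked GitHub repository for that); the `EdgeServer` class, the `migrate` helper, and all state fields are hypothetical names chosen for this example:

```python
import pickle

class EdgeServer:
    """Toy edge server holding the offloaded DNN partition's training state."""

    def __init__(self, name):
        self.name = name
        self.state = {}  # device_id -> training state for that device

    def register(self, device_id, partition_weights, optimizer_state, epoch):
        # The state the server needs to continue training the offloaded
        # layers: partition weights, optimizer state, and training progress.
        self.state[device_id] = {
            "weights": partition_weights,
            "optimizer": optimizer_state,
            "epoch": epoch,
        }

    def export_state(self, device_id):
        # Serialize the partition state for transfer over the network,
        # removing it from this server.
        return pickle.dumps(self.state.pop(device_id))

    def import_state(self, device_id, blob):
        # The destination server deserializes the state and can resume
        # training from where the source server left off.
        self.state[device_id] = pickle.loads(blob)


def migrate(device_id, source, target):
    """Move a device's offloaded training state between edge servers."""
    target.import_state(device_id, source.export_state(device_id))


src, dst = EdgeServer("server-A"), EdgeServer("server-B")
src.register("device-7", partition_weights=[0.1, 0.2],
             optimizer_state={"lr": 0.01}, epoch=45)
migrate("device-7", src, dst)
print(dst.state["device-7"]["epoch"])  # prints 45: training resumes, not restarts
```

The point of the sketch is the paper's central claim: because the serialized partition state includes training progress, the destination server resumes from the current epoch instead of restarting, which is why migration overhead can stay in the range of seconds.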

