Item metadata

dc.contributor.author: Fell, Christina
dc.contributor.author: Mohammadi, Mahnaz
dc.contributor.author: Morrison, David
dc.contributor.author: Arandjelovic, Ognjen
dc.contributor.author: Caie, Peter
dc.contributor.author: Harris-Birtill, David
dc.date.accessioned: 2022-12-05T10:30:09Z
dc.date.available: 2022-12-05T10:30:09Z
dc.date.issued: 2022-12-02
dc.identifier: 282443908
dc.identifier: d5e0e037-75d1-431c-9410-ab66ef085114
dc.identifier.citation: Fell, C, Mohammadi, M, Morrison, D, Arandjelovic, O, Caie, P & Harris-Birtill, D 2022, 'Reproducibility of deep learning in digital pathology whole slide image analysis', PLOS Digital Health, vol. 1, no. 12, e0000145. https://doi.org/10.1371/journal.pdig.0000145
dc.identifier.issn: 2767-3170
dc.identifier.other: RIS: urn:B39115068696B0F5F768476BEF864970
dc.identifier.other: ORCID: /0000-0002-0740-3668/work/124490185
dc.identifier.other: ORCID: /0000-0001-5502-9773/work/136696661
dc.identifier.uri: https://hdl.handle.net/10023/26542
dc.description: Funding: This work is supported by the Industrial Centre for AI Research in digital Diagnostics (iCAIRD), which is funded by Innovate UK on behalf of UK Research and Innovation (UKRI) [project number: 104690], and in part by the Chief Scientist Office, Scotland.
dc.description.abstract: For a method to be widely adopted in medical research or clinical practice, it needs to be reproducible so that clinicians and regulators can have confidence in its use. Machine learning and deep learning have a particular set of challenges around reproducibility. Small differences in the settings or the data used for training a model can lead to large differences in the outcomes of experiments. In this work, three top-performing algorithms from the Camelyon grand challenges are reproduced using only information presented in the associated papers, and the results are then compared to those reported. Seemingly minor details were found to be critical to performance, and yet their importance is difficult to appreciate until the actual reproduction is attempted. We observed that authors generally describe the key technical aspects of their models well but fail to maintain the same reporting standards when it comes to data preprocessing, which is essential to reproducibility. As an important contribution of the present study and its findings, we introduce a reproducibility checklist that tabulates the information that needs to be reported in histopathology ML-based work in order to make it reproducible.
dc.format.extent: 21
dc.format.extent: 694372
dc.language.iso: eng
dc.relation.ispartof: PLOS Digital Health
dc.subject: QA75 Electronic computers. Computer science
dc.subject: RB Pathology
dc.subject: DAS
dc.subject: MCC
dc.subject.lcc: QA75
dc.subject.lcc: RB
dc.title: Reproducibility of deep learning in digital pathology whole slide image analysis
dc.type: Journal article
dc.contributor.sponsor: Technology Strategy Board
dc.contributor.institution: University of St Andrews. School of Medicine
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.contributor.institution: University of St Andrews. Centre for Research into Ecological & Environmental Modelling
dc.identifier.doi: 10.1371/journal.pdig.0000145
dc.description.status: Peer reviewed
dc.identifier.grantnumber: TS/S013121/1

