Item metadata

dc.contributor.author: Fell, Christina
dc.contributor.author: Mohammadi, Mahnaz
dc.contributor.author: Morrison, David
dc.contributor.author: Arandjelovic, Ognjen
dc.contributor.author: Caie, Peter
dc.contributor.author: Harris-Birtill, David
dc.identifier.citation: Fell, C., Mohammadi, M., Morrison, D., Arandjelovic, O., Caie, P. & Harris-Birtill, D. 2022, 'Reproducibility of deep learning in digital pathology whole slide image analysis', PLOS Digital Health, vol. 1, no. 12, e0000145.
dc.identifier.other: PURE: 282443908
dc.identifier.other: PURE UUID: d5e0e037-75d1-431c-9410-ab66ef085114
dc.identifier.other: RIS: urn:B39115068696B0F5F768476BEF864970
dc.identifier.other: ORCID: /0000-0002-0740-3668/work/124490185
dc.identifier.other: ORCID: /0000-0001-5502-9773/work/136696661
dc.description: Funding: This work is supported by the Industrial Centre for AI Research in digital Diagnostics (iCAIRD), which is funded by Innovate UK on behalf of UK Research and Innovation (UKRI) [project number: 104690], and in part by the Chief Scientist Office, Scotland.
dc.description.abstract: For a method to be widely adopted in medical research or clinical practice, it needs to be reproducible so that clinicians and regulators can have confidence in its use. Machine learning and deep learning have a particular set of challenges around reproducibility. Small differences in the settings or the data used for training a model can lead to large differences in the outcomes of experiments. In this work, three top-performing algorithms from the Camelyon grand challenges are reproduced using only information presented in the associated papers, and the results are then compared to those reported. Seemingly minor details were found to be critical to performance, and yet their importance is difficult to appreciate until the actual reproduction is attempted. We observed that authors generally describe the key technical aspects of their models well but fail to maintain the same reporting standards when it comes to data preprocessing, which is essential to reproducibility. As an important contribution of the present study and its findings, we introduce a reproducibility checklist that tabulates information that needs to be reported in histopathology ML-based work in order to make it reproducible.
dc.relation.ispartof: PLOS Digital Health
dc.rights: Copyright: © 2022 Fell et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.subject: QA75 Electronic computers. Computer science
dc.subject: RB Pathology
dc.title: Reproducibility of deep learning in digital pathology whole slide image analysis
dc.type: Journal article
dc.contributor.sponsor: Technology Strategy Board
dc.description.version: Publisher PDF
dc.contributor.institution: University of St Andrews. School of Medicine
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.contributor.institution: University of St Andrews. Centre for Research into Ecological & Environmental Modelling
dc.description.status: Peer reviewed
