Item metadata
Field | Value | Language
dc.contributor.advisor | Arandjelović, Ognjen | |
dc.contributor.advisor | Harrison, David James | |
dc.contributor.author | Rumbelow, Jessica | |
dc.coverage.spatial | 187 | en_US |
dc.date.accessioned | 2024-08-30T08:47:59Z | |
dc.date.available | 2024-08-30T08:47:59Z | |
dc.date.issued | 2024-12-03 | |
dc.identifier.uri | https://hdl.handle.net/10023/30440 | |
dc.description.abstract | This thesis explores the development and application of model-agnostic interpretability methods for deep neural networks. I introduce novel techniques for interpreting trained models irrespective of their architecture, including Centroid Maximisation, an adaptation of feature visualisation for segmentation models; the Proxy Model Test, a new evaluation method for saliency mapping algorithms; and Hierarchical Perturbation (HiPe), a novel saliency mapping algorithm that achieves performance comparable to existing model-agnostic methods while reducing computational cost by a factor of 20. The utility of these interpretability methods is demonstrated through two case studies in digital pathology. The first study applies model-agnostic saliency mapping to generate pixel-level segmentations from weakly supervised models, while the second study employs interpretability techniques to uncover potential relationships between DNA morphology and protein expression in CD3-expressing cells. | en_US |
dc.description.sponsorship | "This work is supported by the Industrial Centre for AI Research in Digital Diagnostics (iCAIRD) which is funded by Innovate UK on behalf of UK Research and Innovation (UKRI) [project number: 104690]."--Acknowledgements | en |
dc.language.iso | en | en_US |
dc.subject | Artificial intelligence | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Interpretability | en_US |
dc.subject | Explainability | en_US |
dc.subject | Histopathology | en_US |
dc.subject | Saliency mapping | en_US |
dc.subject | Segmentation | en_US |
dc.subject | Immune contexture | en_US |
dc.subject | Knowledge discovery | en_US |
dc.title | Model agnostic interpretability | en_US |
dc.type | Thesis | en_US |
dc.contributor.sponsor | Innovate UK | en_US |
dc.type.qualificationlevel | Doctoral | en_US |
dc.type.qualificationname | PhD Doctor of Philosophy | en_US |
dc.publisher.institution | The University of St Andrews | en_US |
dc.identifier.doi | https://doi.org/10.17630/sta/1084 | |
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.