Item metadata

dc.contributor.author: Duncan Kerr, Alison
dc.contributor.author: Scharp, Kevin A.
dc.identifier.citation: Duncan Kerr, A. & Scharp, K. A. 2022, 'The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence', Minds and Machines, vol. First Online.
dc.identifier.other: PURE: 281088691
dc.identifier.other: PURE UUID: 36bca60b-8bed-45bc-9021-ac31affbe08c
dc.identifier.other: ORCID: /0000-0003-3900-4087/work/119212777
dc.identifier.other: WOS: 000852599600001
dc.identifier.other: Scopus: 85137798343
dc.description.abstract: Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
dc.relation.ispartof: Minds and Machines [en]
dc.rights: Copyright © The Author(s) 2022. Open Access article. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit
dc.subject: Artificial intelligence [en]
dc.subject: Surveillance capitalism [en]
dc.subject: Explainable artificial intelligence [en]
dc.subject: Machine learning [en]
dc.subject: B Philosophy (General) [en]
dc.title: The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence [en]
dc.type: Journal article [en]
dc.description.version: Publisher PDF [en]
dc.contributor.institution: University of St Andrews. St Andrews Centre for Exoplanet Science [en]
dc.contributor.institution: University of St Andrews. Philosophy [en]
dc.description.status: Peer reviewed [en]
