Item metadata

dc.contributor.author: Duncan Kerr, Alison
dc.contributor.author: Scharp, Kevin A.
dc.date.accessioned: 2022-09-13T11:30:14Z
dc.date.available: 2022-09-13T11:30:14Z
dc.date.issued: 2022-09-11
dc.identifier: 281088691
dc.identifier: 36bca60b-8bed-45bc-9021-ac31affbe08c
dc.identifier: 000852599600001
dc.identifier: 85137798343
dc.identifier.citation: Duncan Kerr, A & Scharp, K A 2022, 'The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence', Minds and Machines, vol. First Online. https://doi.org/10.1007/s11023-022-09609-7 [en]
dc.identifier.issn: 0924-6495
dc.identifier.other: ORCID: /0000-0003-3900-4087/work/119212777
dc.identifier.uri: https://hdl.handle.net/10023/26000
dc.description.abstract: Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.
dc.format.extent: 27
dc.format.extent: 755382
dc.language.iso: eng
dc.relation.ispartof: Minds and Machines [en]
dc.subject: Vagueness [en]
dc.subject: Artificial intelligence [en]
dc.subject: Surveillance capitalism [en]
dc.subject: Epistemicism [en]
dc.subject: Explainable artificial intelligence [en]
dc.subject: Machine learning [en]
dc.subject: B Philosophy (General) [en]
dc.subject: T-NDAS [en]
dc.subject.lcc: B1 [en]
dc.title: The end of vagueness: technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence [en]
dc.type: Journal article [en]
dc.contributor.institution: University of St Andrews. St Andrews Centre for Exoplanet Science [en]
dc.contributor.institution: University of St Andrews. Philosophy [en]
dc.identifier.doi: https://doi.org/10.1007/s11023-022-09609-7
dc.description.status: Peer reviewed [en]

