Item metadata

dc.contributor.author: Ye, Juan
dc.contributor.author: Stevenson, Graeme Turnbull
dc.contributor.author: Dobson, Simon Andrew
dc.date.accessioned: 2015-02-22T00:01:55Z
dc.date.available: 2015-02-22T00:01:55Z
dc.date.issued: 2015-05
dc.identifier.citation: Ye, J, Stevenson, G T & Dobson, S A 2015, 'KCAR: a knowledge-driven approach for concurrent activity recognition', Pervasive and Mobile Computing, vol. 19, pp. 47-70. https://doi.org/10.1016/j.pmcj.2014.02.003
dc.identifier.issn: 1574-1192
dc.identifier.other: PURE: 155034600
dc.identifier.other: PURE UUID: 81662b80-5550-46bc-ba3e-912bdf94d215
dc.identifier.other: Scopus: 84929048258
dc.identifier.other: WOS: 000353830400004
dc.identifier.other: ORCID: /0000-0002-2838-6836/work/68280960
dc.identifier.other: ORCID: /0000-0001-9633-2103/work/70234161
dc.identifier.uri: http://hdl.handle.net/10023/6132
dc.description.abstract: Recognising human activities from sensors embedded in an environment or worn on bodies is an important and challenging research topic in pervasive computing. Existing work on activity recognition is mainly concerned with identifying single-user sequential activities from well-scripted or pre-segmented sequences of sensor events. However, a real-world environment often contains multiple users, each performing activities simultaneously, in their own way and with no explicit instructions to follow. Recognising multi-user concurrent activities is challenging, but essential for designing applications for real environments. This paper presents a novel Knowledge-driven approach for Concurrent Activity Recognition (KCAR). Within KCAR, we explore the semantics underlying each sensor event and use semantic dissimilarity to segment a continuous sensor sequence into fragments, each of which corresponds to one ongoing activity. We exploit the Pyramid Match Kernel, with its strength in approximate matching on hierarchical concepts, to recognise activities with constraints of varying granularity from a potentially noisy sensor sequence. We conduct an empirical evaluation on a large-scale real-world data set that was collected over one year and consists of 2.8 million sensor events. Our results demonstrate that KCAR achieves an average recognition accuracy of 91%.
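The abstract's mention of the Pyramid Match Kernel can be illustrated with a minimal sketch. The code below is not taken from the paper; it shows the general pyramid-match idea (Grauman & Drell-style weighted histogram intersection) applied to concepts in a hierarchy, where matches found only at coarser levels are discounted. The `ancestors` callback, the toy taxonomy, and the `1/2**level` weighting are illustrative assumptions, not KCAR's actual formulation.

```python
from collections import Counter

def histogram_intersection(a, b):
    # Number of co-occurring items in two multisets (min counts per key).
    return sum((a & b).values())

def pyramid_match(seq_x, seq_y, ancestors, depth):
    """Pyramid-match similarity between two concept sequences.

    `ancestors(c, level)` is a caller-supplied (hypothetical) function
    mapping a concept to its ancestor at `level` in the hierarchy
    (level 0 = the concept itself; higher levels are coarser).
    Matches that only appear at a coarser level are discounted by
    1 / 2**level, mirroring the pyramid match kernel's level weights.
    """
    score, prev_matches = 0.0, 0
    for level in range(depth):
        hx = Counter(ancestors(c, level) for c in seq_x)
        hy = Counter(ancestors(c, level) for c in seq_y)
        matches = histogram_intersection(hx, hy)
        # Only the *new* matches found at this coarser level count.
        score += (matches - prev_matches) / (2 ** level)
        prev_matches = matches
    return score
```

For example, with a toy two-level taxonomy where "kettle" and "tap" both sit under "kitchen", the sequences `["kettle"]` and `["tap"]` share no exact match but still score 0.5 through their common parent, which is the approximate-matching behaviour the abstract alludes to.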
dc.format.extent: 24
dc.language.iso: eng
dc.relation.ispartof: Pervasive and Mobile Computing
dc.rights: © 2014. Elsevier B.V. All rights reserved. This is the author's version of a work that was accepted for publication in Pervasive and Mobile Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pervasive and Mobile Computing, DOI http://dx.doi.org/10.1016/j.pmcj.2014.02.003
dc.subject: Ontologies
dc.subject: Smart home
dc.subject: Concurrent activity recognition
dc.subject: Semantics
dc.subject: Domain knowledge
dc.subject: Pyramid match kernel
dc.subject: QA75 Electronic computers. Computer science
dc.subject: NDAS
dc.subject: BDC
dc.subject: R2C
dc.subject.lcc: QA75
dc.title: KCAR: a knowledge-driven approach for concurrent activity recognition
dc.type: Journal article
dc.description.version: Postprint
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.identifier.doi: https://doi.org/10.1016/j.pmcj.2014.02.003
dc.description.status: Peer reviewed
dc.date.embargoedUntil: 2015-02-22

