

Item metadata

dc.contributor.author: Jiang, Ai
dc.contributor.author: Nacenta, Miguel
dc.contributor.author: Terzić, Kasim
dc.contributor.author: Ye, Juan
dc.contributor.editor: Munson, Sean A.
dc.contributor.editor: Schueller, Stephen M.
dc.date.accessioned: 2021-03-03T16:30:14Z
dc.date.available: 2021-03-03T16:30:14Z
dc.date.issued: 2020-05-18
dc.identifier: 267071323
dc.identifier: 87933f86-3c09-40f5-82e5-0a886f97b33b
dc.identifier: 85100742565
dc.identifier.citation: Jiang, A., Nacenta, M., Terzić, K. & Ye, J. 2020, 'Visualization as Intermediate Representations (VLAIR) for human activity recognition', in S. A. Munson & S. M. Schueller (eds), PervasiveHealth '20: Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare, ACM, pp. 201-210, 14th EAI International Conference on Pervasive Computing Technologies for Healthcare (EAI PervasiveHealth 2020), Atlanta, Georgia, United States, 6/10/20. https://doi.org/10.1145/3421937.3422015
dc.identifier.citation: conference
dc.identifier.isbn: 9781450375320
dc.identifier.other: ORCID: /0000-0002-9864-9654/work/90112146
dc.identifier.other: ORCID: /0000-0002-2838-6836/work/90112778
dc.identifier.uri: https://hdl.handle.net/10023/21551
dc.description.abstract: Ambient, binary, event-driven sensor data is useful for many human activity recognition applications such as smart homes and ambient-assisted living. These sensors are privacy-preserving, unobtrusive, inexpensive, and easy to deploy in scenarios that require detection of simple activities such as going to sleep or leaving the house. However, classification performance remains a challenge, especially when multiple people share the same space or when different activities take place in the same areas. To improve classification performance, we develop what we call a Visualization as Intermediate Representations (VLAIR) approach. The main idea is to re-represent the data as visualizations (generated pixel images), in a similar way to how visualizations are created for humans to analyse and communicate data. We can then feed these images to a convolutional neural network, whose strength resides in extracting effective visual features. We have tested five variants (mappings) of the VLAIR approach and compared them to a collection of classifiers commonly used in classic human activity recognition. The best of the VLAIR approaches outperforms the best baseline, with a strong advantage in recognising less frequent activities and in distinguishing users and activities in common areas. We conclude the paper with a discussion of why and how VLAIR can be useful in human activity recognition scenarios and beyond.
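The core step the abstract describes — re-representing binary, event-driven sensor data as pixel images before classification — can be sketched with a minimal numpy rasterizer. This is an illustrative assumption of what one such mapping might look like (sensors on one axis, time bins on the other); the paper itself evaluates five specific mappings, none of which is reproduced here, and the function name and parameters are hypothetical.

```python
import numpy as np

def events_to_image(events, n_sensors, window, height=32, width=32):
    """Rasterize binary sensor activation events into a grayscale image.

    events: iterable of (timestamp, sensor_id) pairs within [0, window).
    Rows index sensors, columns index time bins -- a hypothetical
    mapping for illustration, not one of the paper's five VLAIR mappings.
    """
    img = np.zeros((height, width), dtype=np.float32)
    row_h = height / n_sensors   # pixel rows allotted per sensor
    col_w = width / window       # pixel columns allotted per time unit
    for t, sid in events:
        r0 = int(sid * row_h)
        c0 = int(t * col_w)
        # Paint a block so each event is visible at image resolution.
        img[r0:int(r0 + max(row_h, 1)), c0:int(c0 + max(col_w, 1))] = 1.0
    return img

# Example: three sensors firing at different times in a 10-second window.
events = [(0, 0), (3, 1), (7, 2)]
image = events_to_image(events, n_sensors=3, window=10)
```

An image produced this way could then be fed to any standard convolutional network for classification, which is the second half of the pipeline the abstract outlines.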
dc.format.extent: 10
dc.format.extent: 1802390
dc.language.iso: eng
dc.publisher: ACM
dc.relation.ispartof: PervasiveHealth '20:
dc.subject: Information visualization
dc.subject: Intermediate representations
dc.subject: Human activity recognition
dc.subject: Convolutional neural networks
dc.subject: Smart homes
dc.subject: QA75 Electronic computers. Computer science
dc.subject: T Technology
dc.subject: NDAS
dc.subject: BDU
dc.subject.lcc: QA75
dc.subject.lcc: T
dc.title: Visualization as Intermediate Representations (VLAIR) for human activity recognition
dc.type: Conference item
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.contributor.institution: University of St Andrews. Coastal Resources Management Group
dc.identifier.doi: 10.1145/3421937.3422015
dc.date.embargoedUntil: 2020-05-18

