Visualization as Intermediate Representations (VLAIR) for human activity recognition
Ambient, binary, event-driven sensor data is useful for many human activity recognition applications such as smart homes and ambient-assisted living. These sensors are privacy-preserving, unobtrusive, inexpensive and easy to deploy in scenarios that require detection of simple activities such as going to sleep and leaving the house. However, classification performance is still a challenge, especially when multiple people share the same space or when different activities take place in the same areas. To improve classification performance, we develop what we call a Visualization as Intermediate Representations (VLAIR) approach. The main idea is to re-represent the data as visualizations (generated pixel images), in a similar way to how visualizations are created for humans to analyse and communicate data. We then feed these images to a convolutional neural network, whose strength lies in extracting effective visual features. We have tested five variants (mappings) of the VLAIR approach and compared them to a collection of classifiers commonly used in classic human activity recognition. The best of the VLAIR approaches outperforms the best baseline, with a strong advantage in recognising less frequent activities and in distinguishing users and activities in common areas. We conclude the paper with a discussion of why and how VLAIR can be useful in human activity recognition scenarios and beyond.
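To make the core idea concrete, the following is a minimal, hypothetical sketch of the re-representation step: rasterizing a window of binary, event-driven sensor activations into a small grayscale image that a convolutional network could then consume. The grid layout, sensor count, and time-binning scheme are illustrative assumptions, not the paper's actual mappings.

```python
import numpy as np

def events_to_image(events, n_sensors=8, n_bins=8):
    """Rasterize (sensor_id, timestamp) events into an n_sensors x n_bins
    grayscale image: rows index sensors, columns index time bins within the
    window. A pixel is set to 1.0 if that sensor fired in that bin.

    This is an illustrative sketch of the VLAIR idea, not the paper's
    specific visual mapping.
    """
    img = np.zeros((n_sensors, n_bins), dtype=np.float32)
    if not events:
        return img
    t0 = min(t for _, t in events)
    t1 = max(t for _, t in events)
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single event
    for sensor_id, t in events:
        col = min(int((t - t0) / span * n_bins), n_bins - 1)
        img[sensor_id, col] = 1.0  # binary sensor fired in this time bin
    return img

# Example window: a motion sensor (id 0) fires twice early,
# a door sensor (id 3) fires near the end of the window.
window = [(0, 0.0), (0, 1.0), (3, 9.0)]
image = events_to_image(window)
print(image.shape)                  # (8, 8)
print(image[0, 0], image[3, 7])    # 1.0 1.0
```

Images produced this way can be batched and passed to any standard image classifier (e.g. a small CNN), which is what lets the approach reuse the feature-extraction strengths of convolutional networks.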
Jiang, A., Nacenta, M., Terzić, K. & Ye, J. 2020, 'Visualization as Intermediate Representations (VLAIR) for human activity recognition', in S. A. Munson & S. M. Schueller (eds), PervasiveHealth '20: Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare, ACM, pp. 201-210, 14th EAI International Conference on Pervasive Computing Technologies for Healthcare (EAI PervasiveHealth 2020), Atlanta, United States, 6/10/20. https://doi.org/10.1145/3421937.3422015
Copyright © 2020 Association for Computing Machinery. This work has been made available online in accordance with publisher policies or with permission. Permission for further reuse of this content should be sought from the publisher or the rights holder. This is the author created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1145/3421937.3422015.
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.