VisuaLizations As Intermediate Representations (VLAIR): an approach for applying deep learning-based computer vision to non-image-based data
Abstract
Deep learning algorithms increasingly support automated systems in areas such as human activity recognition and purchase recommendation. We identify a current trend in which data is transformed first into abstract visualizations and then processed by a computer vision deep learning pipeline. We call this VisuaLization As Intermediate Representation (VLAIR) and believe that it can be instrumental in supporting accurate recognition in a number of fields while also enhancing humans’ ability to interpret deep learning models for debugging purposes or in personal use. In this paper we describe the potential advantages of this approach and explore various visualization mappings and deep learning architectures. We evaluate several VLAIR alternatives for a specific problem (human activity recognition in an apartment) and show that, across several data representations, VLAIR attains classification accuracy above classical machine learning algorithms and several other non-image-based deep learning algorithms.
Citation
Jiang, A., Nacenta, M. A. & Ye, J. 2022, 'VisuaLizations As Intermediate Representations (VLAIR): an approach for applying deep learning-based computer vision to non-image-based data', Visual Informatics, vol. 6, no. 3, pp. 35–50. https://doi.org/10.1016/j.visinf.2022.05.001
Publication
Visual Informatics
Status
Peer reviewed
ISSN
2468-502X
Type
Journal article
Rights
Copyright © 2022 The Authors. Published by Elsevier B.V. on behalf of Zhejiang University and Zhejiang University Press Co. Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Description
Funding: We thank the China Scholarship Council (CSC) for financially supporting my PhD study at the University of St Andrews, UK, and NSERC Discovery Grant 2020-04401 (Miguel Nacenta).