Interpretable feature maps for robot attention
Date
2017
Keywords
Abstract
Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. But most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.
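To illustrate the idea of top-down search over interpretable feature maps, here is a minimal sketch (not the authors' implementation; function names and the weighting interface are hypothetical): each map scores one semantic cue such as colour or orientation, and task-specific weights bias attention toward the expected appearance of the target.

```python
# Minimal sketch of top-down attention over interpretable feature maps.
# Hypothetical interface for illustration; not the method from the paper.
import numpy as np

def top_down_saliency(feature_maps: dict, target_weights: dict) -> np.ndarray:
    """Combine per-feature maps into a single saliency map.

    feature_maps   -- name -> 2-D array of per-pixel feature responses
    target_weights -- name -> weight encoding the target's expected appearance
    """
    h, w = next(iter(feature_maps.values())).shape
    saliency = np.zeros((h, w))
    for name, fmap in feature_maps.items():
        # Normalise each map to [0, 1] so no single cue dominates by scale.
        rng = fmap.max() - fmap.min()
        norm = (fmap - fmap.min()) / rng if rng > 0 else np.zeros_like(fmap)
        saliency += target_weights.get(name, 0.0) * norm
    return saliency

# Example: search for a red, vertically oriented object.
rng = np.random.default_rng(0)
maps = {"red": rng.random((48, 64)), "vertical": rng.random((48, 64))}
sal = top_down_saliency(maps, {"red": 1.0, "vertical": 0.5})
focus_y, focus_x = np.unravel_index(np.argmax(sal), sal.shape)
print("attend at pixel", (focus_x, focus_y))
```

Because every map corresponds to a nameable property, the same weighting step can be re-targeted to a different object simply by changing the weights, which is what makes the representation suitable for expectation-driven (top-down) attention.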
Citation
Terzić, K & du Buf, JMH 2017, 'Interpretable feature maps for robot attention', in M Antona & C Stephanidis (eds), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10277, Springer, Cham, pp. 456–467, 11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017, Vancouver, British Columbia, Canada, 9/07/17. https://doi.org/10.1007/978-3-319-58706-6_37
Publication
Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods
ISSN
0302-9743
Type
Conference item
Rights
© 2017, Springer. This work has been made available online in accordance with the publisher’s policies. This is the author-created, accepted manuscript following peer review and as such it may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-319-58706-6_37
Collections
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.