Interpretable feature maps for robot attention
Attention is crucial for autonomous agents interacting with complex environments. In real scenarios, our expectations drive attention: we look for crucial objects to complete our understanding of the scene. Yet most visual attention models to date drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.
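The abstract does not describe an implementation, but the idea of searching for objects by weighting interpretable feature maps can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the authors' method: it assumes per-feature response maps (e.g. colour, orientation, texture) have already been computed, and the topdown_saliency helper and the chosen weights are invented for the example.

    import numpy as np

    def topdown_saliency(feature_maps, weights):
        """Combine per-feature response maps into one top-down saliency map.

        feature_maps: dict of feature name -> 2-D response map (same shape).
        weights:      dict of feature name -> scalar relevance of that
                      feature to the current search target.
        """
        names = list(feature_maps)
        saliency = np.zeros(feature_maps[names[0]].shape, dtype=float)
        for name in names:
            fmap = feature_maps[name].astype(float)
            # Normalise each map to [0, 1] so no feature dominates merely
            # because of its dynamic range.
            span = fmap.max() - fmap.min()
            if span > 0:
                fmap = (fmap - fmap.min()) / span
            saliency += weights.get(name, 0.0) * fmap
        return saliency

    # Example: bias the search towards a red, vertically oriented object.
    h, w = 48, 64
    rng = np.random.default_rng(0)
    maps = {
        'colour':      rng.random((h, w)),  # stand-in for a redness response
        'orientation': rng.random((h, w)),  # stand-in for a vertical-edge response
        'texture':     rng.random((h, w)),
    }
    weights = {'colour': 1.0, 'orientation': 0.6, 'texture': 0.1}
    saliency = topdown_saliency(maps, weights)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    print(f"Most promising location for the target: pixel ({x}, {y})")

Normalising each map before the weighted sum keeps features with large dynamic ranges from dominating the combination; the paper's actual combination scheme may differ.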
Terzić, K & du Buf, JMH 2017, 'Interpretable feature maps for robot attention', in M Antona & C Stephanidis (eds), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I, Lecture Notes in Computer Science, vol. 10277, Springer, Cham, pp. 456–467. https://doi.org/10.1007/978-3-319-58706-6_37
Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods
© 2017, Springer. This work has been made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review and as such may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-319-58706-6_37
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.