Interpretable feature maps for robot attention
Item metadata
dc.contributor.author | Terzić, Kasim | |
dc.contributor.author | du Buf, J. M.H. | |
dc.contributor.editor | Antona, Margherita | |
dc.contributor.editor | Stephanidis, Constantine | |
dc.date.accessioned | 2018-09-03T15:30:05Z | |
dc.date.available | 2018-09-03T15:30:05Z | |
dc.date.issued | 2017 | |
dc.identifier.citation | Terzić, K & du Buf, J M H 2017, Interpretable feature maps for robot attention. in M Antona & C Stephanidis (eds), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10277, Springer, Cham, pp. 456-467, 11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017, Vancouver, British Columbia, Canada, 9/07/17. https://doi.org/10.1007/978-3-319-58706-6_37 | en |
dc.identifier.citation | conference | en |
dc.identifier.isbn | 9783319587059 | |
dc.identifier.isbn | 9783319587066 | |
dc.identifier.issn | 0302-9743 | |
dc.identifier.other | PURE: 255500481 | |
dc.identifier.other | PURE UUID: 725a4c9a-2ad1-4917-80a4-9232d5ff1ccc | |
dc.identifier.other | Scopus: 85025168961 | |
dc.identifier.other | WOS: 000456925000037 | |
dc.identifier.uri | https://hdl.handle.net/10023/15948 | |
dc.description.abstract | Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. However, most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture. | |
dc.format.extent | 12 | |
dc.language.iso | eng | |
dc.publisher | Springer | |
dc.relation.ispartof | Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods | en |
dc.relation.ispartofseries | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | en |
dc.rights | © 2017, Springer. This work has been made available online in accordance with the publisher’s policies. This is the author created accepted version manuscript following peer review and as such may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-319-58706-6_37 | en |
dc.subject | QA75 Electronic computers. Computer science | en |
dc.subject | T Technology | en |
dc.subject | Computer Science(all) | en |
dc.subject | Theoretical Computer Science | en |
dc.subject | NDAS | en |
dc.subject.lcc | QA75 | en |
dc.subject.lcc | T | en |
dc.title | Interpretable feature maps for robot attention | en |
dc.type | Conference item | en |
dc.description.version | Postprint | en |
dc.contributor.institution | University of St Andrews. School of Computer Science | en |
dc.identifier.doi | https://doi.org/10.1007/978-3-319-58706-6_37 | |