
Item metadata

dc.contributor.author: Terzić, Kasim
dc.contributor.author: du Buf, J. M. H.
dc.contributor.editor: Antona, Margherita
dc.contributor.editor: Stephanidis, Constantine
dc.date.accessioned: 2018-09-03T15:30:05Z
dc.date.available: 2018-09-03T15:30:05Z
dc.date.issued: 2017
dc.identifier.citation: Terzić, K & du Buf, J M H 2017, Interpretable feature maps for robot attention. in M Antona & C Stephanidis (eds), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10277, Springer, Cham, pp. 456-467, 11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017, Vancouver, British Columbia, Canada, 9/07/17. https://doi.org/10.1007/978-3-319-58706-6_37
dc.identifier.citation: conference
dc.identifier.isbn: 9783319587059
dc.identifier.isbn: 9783319587066
dc.identifier.issn: 0302-9743
dc.identifier.other: PURE: 255500481
dc.identifier.other: PURE UUID: 725a4c9a-2ad1-4917-80a4-9232d5ff1ccc
dc.identifier.other: Scopus: 85025168961
dc.identifier.other: WOS: 000456925000037
dc.identifier.uri: https://hdl.handle.net/10023/15948
dc.description.abstract: Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. But most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.
dc.format.extent: 12
dc.language.iso: eng
dc.publisher: Springer
dc.relation.ispartof: Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods
dc.relation.ispartofseries: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.rights: © 2017, Springer. This work has been made available online in accordance with the publisher's policies. This is the author-created accepted version manuscript following peer review and as such may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-319-58706-6_37
dc.subject: QA75 Electronic computers. Computer science
dc.subject: T Technology
dc.subject: Computer Science (all)
dc.subject: Theoretical Computer Science
dc.subject: NDAS
dc.subject.lcc: QA75
dc.subject.lcc: T
dc.title: Interpretable feature maps for robot attention
dc.type: Conference item
dc.description.version: Postprint
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.identifier.doi: https://doi.org/10.1007/978-3-319-58706-6_37

