Item metadata

dc.contributor.author	Saleiro, Mario
dc.contributor.author	Farrajota, Miguel
dc.contributor.author	Terzić, Kasim
dc.contributor.author	Krishna, Sai
dc.contributor.author	Rodrigues, João M. F.
dc.contributor.author	du Buf, J. M. Hans
dc.contributor.editor	Antona, Margherita
dc.contributor.editor	Stephanidis, Constantine
dc.identifier.citation	Saleiro, M, Farrajota, M, Terzić, K, Krishna, S, Rodrigues, JMF & du Buf, JMH 2015, Biologically inspired vision for human-robot interaction. in M Antona & C Stephanidis (eds), Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9176, Springer, Cham, pp. 505-517, 9th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2015, held as part of the 17th International Conference on Human-Computer Interaction, HCI International 2015, Los Angeles, California, United States, 2/08/15.
dc.identifier.other	PURE: 255500590
dc.identifier.other	PURE UUID: c25dfc2b-e480-4269-9cd5-c73e23912193
dc.identifier.other	Scopus: 84945911426
dc.description.abstract	Human-robot interaction is an interdisciplinary research area that is becoming increasingly relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection, and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.
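The hand/face detection step described in the abstract relies on a linear SVM classifier. As a rough illustration only, the following is a hypothetical pure-NumPy sketch of a linear SVM trained with Pegasos-style sub-gradient descent; the synthetic random features stand in for the cortical keypoint descriptors the paper actually uses, and none of this reflects the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for the paper's linear-SVM detector: a minimal
# linear SVM trained by sub-gradient descent (Pegasos-style) on synthetic
# features. The real system classifies cortical keypoint descriptors.

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Return weights w and bias b for labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                 # inside margin: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                          # only shrink w (regularization)
                w = (1 - eta * lam) * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Toy data: two separable Gaussian blobs standing in for "hand" vs "background".
rng = np.random.default_rng(1)
pos = rng.normal(loc=2.0, scale=0.5, size=(50, 8))
neg = rng.normal(loc=-2.0, scale=0.5, size=(50, 8))
X = np.vstack([pos, neg])
y = np.hstack([np.ones(50), -np.ones(50)])

w, b = train_linear_svm(X, y)
acc = np.mean(predict(X, w, b) == y)
print(f"training accuracy: {acc:.2f}")
```

In the paper's pipeline the same kind of linear decision function would be evaluated over candidate image regions, with the feature vector computed from V1-style keypoint responses rather than raw pixels.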
dc.relation.ispartof	Universal Access in Human-Computer Interaction. Access to Interaction	en
dc.relation.ispartofseries	Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)	en
dc.rights	© 2015, Springer International Publishing Switzerland. This work has been made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review and as such may differ slightly from the final published version. The final published version of this work is available at
dc.subject	Biological framework	en
dc.subject	Hand gestures	en
dc.subject	Human-robot interaction	en
dc.subject	QA75 Electronic computers. Computer science	en
dc.subject	QH301 Biology	en
dc.subject	T Technology	en
dc.subject	Computer Science (all)	en
dc.subject	Theoretical Computer Science	en
dc.title	Biologically inspired vision for human-robot interaction	en
dc.type	Conference item	en
dc.contributor.institution	University of St Andrews. School of Computer Science	en
