Depth perception in disparity-defined objects: finding the balance between averaging and segregation
Abstract
Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth obtained via binocular disparity (the differences between the two eyes' views) could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators.
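The sketch below is not the authors' published model; it is a minimal illustration of the segregate-then-average idea described in the abstract, under two assumptions of ours: a cosine-shaped 1-D disparity profile stands in for a smooth-edged object, and the "central portion" is taken as a fixed fraction of the object's width. The function names (disparity_profile, perceived_depth) are hypothetical. It shows how averaging disparity over a central region underestimates the peak depth of a smooth-edged object while preserving it for a flat-topped, sharp-edged one.

```python
import numpy as np

def disparity_profile(width=101, peak=1.0, edge="smooth"):
    """Toy 1-D disparity profile across an object (assumption, not the paper's stimuli)."""
    x = np.linspace(-1.0, 1.0, width)
    if edge == "sharp":
        # Flat-topped object: disparity jumps straight to the peak at the edges.
        return np.where(np.abs(x) <= 0.8, peak, 0.0)
    # Smoothly varying edges: disparity ramps gradually toward the centre.
    return peak * np.cos(np.pi * x / 2)

def perceived_depth(disparity, central_fraction=0.5):
    """Hypothetical segregate-then-average rule: isolate a central portion of the
    object (here a fixed fraction of its width) and average disparity over it."""
    n = len(disparity)
    half = int(n * central_fraction / 2)
    centre = slice(n // 2 - half, n // 2 + half + 1)
    return disparity[centre].mean()

for edge in ("sharp", "smooth"):
    d = disparity_profile(edge=edge)
    print(edge, "- true peak:", d.max(), "- averaged estimate:", round(perceived_depth(d), 3))
```

With these assumptions, the sharp-edged profile returns an estimate equal to its peak, whereas the smooth-edged profile returns roughly 90% of its peak, mirroring the reduction in perceived depth reported for smoothly varying depth edges.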
Citation
Cammack, P. P. K. & Harris, J. 2016, 'Depth perception in disparity-defined objects: finding the balance between averaging and segregation', Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 371, no. 1697, 20150258. https://doi.org/10.1098/rstb.2015.0258
Publication
Philosophical Transactions of the Royal Society B: Biological Sciences
Status
Peer reviewed
ISSN
0962-8436
Type
Journal article
Rights
Copyright 2016 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
Description
The work was funded by BBSRC grant BB/J000272/1 and EPSRC grant EP/L505079/1.
Collections
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.