Files in this item
Big data driven detection of trees in suburban scenes using visual spectrum eye level photography
Item metadata
dc.contributor.author | Thirlwell, Andrew | |
dc.contributor.author | Arandjelović, Ognjen | |
dc.date.accessioned | 2020-06-03T14:30:04Z | |
dc.date.available | 2020-06-03T14:30:04Z | |
dc.date.issued | 2020-05-28 | |
dc.identifier.citation | Thirlwell, A & Arandjelović, O 2020, 'Big data driven detection of trees in suburban scenes using visual spectrum eye level photography', Sensors, vol. 20, no. 11, 3051. https://doi.org/10.3390/s20113051 | en |
dc.identifier.issn | 1424-8220 | |
dc.identifier.other | PURE: 268315882 | |
dc.identifier.other | PURE UUID: c9b2eed3-7d1e-4539-880e-2054bfc7ee26 | |
dc.identifier.other | Bibtex: s20113051 | |
dc.identifier.other | WOS: 000552737900044 | |
dc.identifier.other | Scopus: 85085678717 | |
dc.identifier.uri | https://hdl.handle.net/10023/20044 | |
dc.description.abstract | The aim of the work described in this paper is to detect trees in eye level view images. Unlike previous work, which universally considers highly constrained environments, such as natural parks and wooded areas, or simple scenes with little clutter and clear tree separation, our focus is on much more challenging suburban scenes, which are rich in clutter and highly variable in type and appearance (houses, walls, shrubs, cars, bicycles, pedestrians, hydrants, lamp posts, etc.). Thus, we motivate and introduce three different approaches: (i) a conventional computer vision based approach, employing manually engineered steps and making use of explicit human knowledge of the application domain, (ii) a more machine learning oriented approach, which learns from densely extracted local features in the form of scale invariant feature transform (SIFT) descriptors, and (iii) a machine learning based approach, which employs both colour and appearance models as a means of making the most of the available discriminative information. We also make a significant contribution with regard to the collection of training and evaluation data. In contrast to existing work, which relies on manual data collection (thus risking unintended bias) or on corpora constrained in variability and limited in size (thus not allowing reliable generalisation inferences to be made), we show how large amounts of representative data can be collected automatically using freely available tools, such as Google's Street View, and equally automatically processed to produce a large corpus of minimally biased imagery. Using a large data set collected in this manner and comprising tens of thousands of images, we confirm the theoretical arguments that motivated our machine learning based, colour-aware histograms of oriented gradients based method, which achieved a recall of 95% and a precision of 97%. | |
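To illustrate the kind of colour-aware appearance descriptor the abstract alludes to, the sketch below concatenates a simple gradient-orientation histogram (HOG-like) with per-channel colour histograms. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the function names, bin counts, and normalisation scheme are all assumptions.

```python
import numpy as np

def hog_like_descriptor(gray, n_bins=9):
    """Unsigned gradient-orientation histogram over the whole patch.

    A real HOG splits the patch into cells and blocks; a single global
    histogram is used here to keep the sketch short.
    """
    gy, gx = np.gradient(gray.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-9)               # L1-normalise

def colour_histogram(rgb, n_bins=8):
    """Per-channel intensity histograms, concatenated and L1-normalised."""
    hists = [np.histogram(rgb[..., c], bins=n_bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-9)

def colour_aware_descriptor(rgb):
    """Join shape (gradient) and colour cues into one feature vector."""
    gray = rgb.mean(axis=2)
    return np.concatenate([hog_like_descriptor(gray), colour_histogram(rgb)])
```

In a detection pipeline, descriptors of this form would be computed for image windows and fed to a binary tree/non-tree classifier; the combination of gradient and colour cues reflects the abstract's point that colour carries discriminative information beyond appearance alone.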
dc.format.extent | 15 | |
dc.language.iso | eng | |
dc.relation.ispartof | Sensors | en |
dc.rights | Copyright © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). | en |
dc.subject | Computer vision | en |
dc.subject | Local features | en |
dc.subject | Machine learning | en |
dc.subject | Street view | en |
dc.subject | Tree stumps | en |
dc.subject | QA75 Electronic computers. Computer science | en |
dc.subject | T Technology | en |
dc.subject | DAS | en |
dc.subject.lcc | QA75 | en |
dc.subject.lcc | T | en |
dc.title | Big data driven detection of trees in suburban scenes using visual spectrum eye level photography | en |
dc.type | Journal article | en |
dc.description.version | Publisher PDF | en |
dc.contributor.institution | University of St Andrews. School of Computer Science | en |
dc.identifier.doi | https://doi.org/10.3390/s20113051 | |
dc.description.status | Peer reviewed | en |
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.