Item metadata
dc.contributor.author | Conn, Brandon | |
dc.contributor.author | Arandelovic, Ognjen | |
dc.date.accessioned | 2017-07-13T08:30:13Z | |
dc.date.available | 2017-07-13T08:30:13Z | |
dc.date.issued | 2017-05-14 | |
dc.identifier.citation | Conn, B & Arandelovic, O 2017, Towards computer vision based ancient coin recognition in the wild — automatic reliable image preprocessing and normalization. in 2017 International Joint Conference on Neural Networks (IJCNN), 7966024, IEEE, pp. 1457-1464, 2017 International Joint Conference on Neural Networks, IJCNN 2017, Anchorage, Alaska, United States, 14/05/17. https://doi.org/10.1109/IJCNN.2017.7966024 | en |
dc.identifier.citation | conference | en |
dc.identifier.isbn | 9781509061822 | |
dc.identifier.other | PURE: 250457653 | |
dc.identifier.other | PURE UUID: 75f77dfb-49f8-4ad2-bc96-a1f552c9715b | |
dc.identifier.other | WOS: 000426968701098 | |
dc.identifier.other | Scopus: 85030976613 | |
dc.identifier.uri | https://hdl.handle.net/10023/11195 | |
dc.description.abstract | As an attractive area of application in the sphere of cultural heritage, in recent years automatic analysis of ancient coins has been attracting an increasing amount of research attention from the computer vision community. Recent work has demonstrated that the existing state of the art performs extremely poorly when applied to images acquired in realistic conditions. One of the reasons behind this lies in the (often implicit) assumptions made by many of the proposed algorithms — a lack of background clutter, and a uniform scale, orientation, and translation of coins across different images. These assumptions are not satisfied by default and, before any further progress in the realm of more complex analysis is made, a robust method capable of preprocessing and normalizing images of coins acquired ‘in the wild’ is needed. In this paper we introduce an algorithm capable of localizing and accurately segmenting out a coin from a cluttered image acquired by an amateur collector. Specifically, we propose a two-stage approach which first uses a simple shape hypothesis to localize the coin roughly and then arrives at the final, accurate result by refining this initial estimate using a statistical model learnt from large amounts of data. Our results on data collected ‘in the wild’ demonstrate excellent accuracy even when the proposed algorithm is applied to highly challenging images. | |
dc.format.extent | 8 | |
dc.language.iso | eng | |
dc.publisher | IEEE | |
dc.relation.ispartof | 2017 International Joint Conference on Neural Networks (IJCNN) | en |
dc.rights | © 2017, IEEE. This work has been made available online in accordance with the publisher’s policies. This is the author created, accepted version manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at ieeexplore.ieee.org / https://doi.org/10.1109/IJCNN.2017.7966024 | en |
dc.subject | CJ Numismatics | en |
dc.subject | QA75 Electronic computers. Computer science | en |
dc.subject | NDAS | en |
dc.subject.lcc | CJ | en |
dc.subject.lcc | QA75 | en |
dc.title | Towards computer vision based ancient coin recognition in the wild — automatic reliable image preprocessing and normalization | en |
dc.type | Conference item | en |
dc.description.version | Postprint | en |
dc.contributor.institution | University of St Andrews. School of Computer Science | en |
dc.identifier.doi | https://doi.org/10.1109/IJCNN.2017.7966024 | |
dc.identifier.url | https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=pure_st-andrews_wos_starter&SrcAuth=WosAPI&KeyUT=WOS:000426968701098&DestLinkType=FullRecord&DestApp=WOS | en |
This item appears in the following Collection(s)
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.