Understanding ancient coin images
In recent years, a range of problems within the broad umbrella of automatic, computer vision based analysis of ancient coins has been attracting an increasing amount of attention. Notwithstanding this research effort, the results achieved by the state of the art in the published literature remain poor and far from sufficiently well performing for any practical purpose. In the present paper we present a series of contributions which we believe will benefit the interested community. Firstly, we explain that the approach of visual matching of coins, universally adopted in all existing published papers on the topic, is not of practical interest because the number of ancient coin types exceeds by far the number of those types which have been imaged, be it in digital form (e.g. online) or otherwise (traditional film, in print, etc.). Rather, we argue that the focus should be on understanding the semantic content of coins. Hence, we describe a novel method which uses real-world multimodal input to extract semantic concepts and associate them with the correct coin images, and then uses a novel convolutional neural network to learn the appearance of these concepts. Using empirical evidence on a real-world data set of ancient coins, by far the largest to date, we demonstrate highly promising results.
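The concept-association step the abstract describes can be illustrated with a minimal sketch: pairing coin images with the semantic concepts mentioned in accompanying free text (e.g. auction-lot descriptions), producing multi-label training pairs for a downstream classifier. The vocabulary, field names, and file names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: extract semantic concepts from free-text coin
# descriptions and pair them with image paths as multi-label training data.
# CONCEPT_VOCAB is an assumed toy vocabulary, not the paper's concept set.
CONCEPT_VOCAB = {"horse", "eagle", "shield", "cornucopia", "patera"}

def extract_concepts(description: str) -> set:
    """Return the known semantic concepts mentioned in a description."""
    tokens = {t.strip(".,;:").lower() for t in description.split()}
    return tokens & CONCEPT_VOCAB

def build_training_pairs(records):
    """Pair each image path with the concept labels found in its text.

    `records` is an iterable of (image_path, description) tuples; images
    whose text mentions no known concept are discarded.
    """
    pairs = []
    for image_path, description in records:
        labels = extract_concepts(description)
        if labels:
            pairs.append((image_path, sorted(labels)))
    return pairs

records = [
    ("denarius_001.jpg", "Emperor on horse left, holding patera."),
    ("denarius_002.jpg", "Legend only, no pictorial device."),
]
print(build_training_pairs(records))
# → [('denarius_001.jpg', ['horse', 'patera'])]
```

The resulting (image, labels) pairs could then feed a multi-label image classifier; the CNN architecture itself is specific to the paper and is not sketched here.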
Cooper, J & Arandjelovic, O 2020, 'Understanding ancient coin images', in L Oneto, N Navarin, A Sperduti & D Anguita (eds), Recent Advances in Big Data and Deep Learning. Proceedings of the International Neural Networks Society, vol. 1, Springer, Cham, pp. 330-340, INNS Big Data and Deep Learning, Genova, Italy, 16/04/19. https://doi.org/10.1007/978-3-030-16841-4_34
Copyright © 2020 Springer Nature Switzerland AG. This work has been made available online in accordance with publisher policies or with permission. Permission for further reuse of this content should be sought from the publisher or the rights holder. This is the author created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-030-16841-4_34
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.