Ancient Roman coin recognition in the wild using deep learning based recognition of artistically depicted face profiles
As an application that is both of particular interest in the realm of cultural heritage and technically challenging, computer vision based analysis of Roman Imperial coins has been attracting an increasing amount of research. In this paper we make several important contributions. Firstly, we address a key limitation of existing work, which is largely characterized by the application of generic object recognition techniques and the lack of use of domain knowledge. In contrast, our work approaches coin recognition in much the same way as a human expert would: by identifying the emperor universally shown on the obverse. To this end we develop a deep convolutional network, carefully crafted for what is effectively a specific instance of profile face recognition. No less importantly, we also address a major methodological flaw of previous research which is, as we explain in detail, insufficiently systematic and rigorous, and mired with confounding factors. Lastly, we introduce three carefully collected and annotated data sets, and use these to demonstrate the effectiveness of the proposed approach, which is shown to exceed the performance of the state of the art by approximately an order of magnitude.
Schlag, I & Arandelovic, O 2017, 'Ancient Roman coin recognition in the wild using deep learning based recognition of artistically depicted face profiles', in 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), 8265553, IEEE, pp. 2898-2906, 2nd ICCV Workshop on e-Heritage, Venice, Italy, 29 October. DOI: 10.1109/ICCVW.2017.342
© 2017, IEEE. This work has been made available online in accordance with the publisher's policies. This is the author-created, accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at DOI: 10.1109/ICCVW.2017.342
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.