Learnt quasi-transitive similarity for retrieval from large collections of faces
Date
26/06/2016
Author
Arandelovic, O.
Abstract
We are interested in identity-based retrieval of face sets from large unlabelled collections acquired in uncontrolled environments. Given a baseline algorithm for measuring the similarity of two face sets, the meta-algorithm introduced in this paper seeks to leverage the structure of the data corpus to make the best use of the available baseline. In particular, we show how partial transitivity of inter-personal similarity can be exploited to improve the retrieval of particularly challenging sets which poorly match the query under the baseline measure. We: (i) describe the use of proxy sets as a means of computing the similarity between two sets, (ii) introduce transitivity meta-features based on the similarity of salient modes of appearance variation between sets, (iii) show how quasi-transitivity can be learnt from such features without any labelling or manual intervention, and (iv) demonstrate the effectiveness of the proposed methodology through experiments on the notoriously challenging YouTube database.
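To make the idea of partial transitivity concrete, below is a minimal illustrative sketch in Python. It is not the paper's method: the paper learns quasi-transitivity from unlabelled transitivity meta-features, whereas this toy uses a fixed max-min bridging rule over a hypothetical cosine baseline. The names baseline_similarity and quasi_transitive_scores, and the representation of a face set as an array of per-frame descriptors, are invented for illustration only.

# Illustrative sketch only, not the algorithm of Arandelovic (2016).
# A face set is modelled as a NumPy array whose rows are per-frame descriptors.
import numpy as np

def baseline_similarity(set_a: np.ndarray, set_b: np.ndarray) -> float:
    """Toy baseline: best cosine similarity between any pair of
    descriptors drawn from the two face sets."""
    a = set_a / np.linalg.norm(set_a, axis=1, keepdims=True)
    b = set_b / np.linalg.norm(set_b, axis=1, keepdims=True)
    return float((a @ b.T).max())

def quasi_transitive_scores(query: np.ndarray,
                            corpus: list[np.ndarray]) -> np.ndarray:
    """Re-rank the corpus: a set C that matches the query poorly under the
    baseline may still be retrieved if some proxy set B in the corpus
    matches both the query and C well.
    score(C) = max(direct score, max over proxies B of
                   min(sim(Q, B), sim(B, C)))."""
    direct = np.array([baseline_similarity(query, c) for c in corpus])
    scores = direct.copy()
    for i, c in enumerate(corpus):
        for j, b in enumerate(corpus):
            if i == j:
                continue
            # Transitive evidence through proxy b, limited by the weaker link.
            bridged = min(direct[j], baseline_similarity(b, c))
            scores[i] = max(scores[i], bridged)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three synthetic "face sets" of 64-d descriptors around one identity.
    identity = rng.normal(size=64)
    query = identity + 0.1 * rng.normal(size=(5, 64))
    easy = identity + 0.1 * rng.normal(size=(5, 64))    # strong direct match
    hard = identity + 0.8 * rng.normal(size=(5, 64))    # weak direct match
    distractor = rng.normal(size=(5, 64))               # different identity
    print(quasi_transitive_scores(query, [easy, hard, distractor]))

The max-min composition is the standard fuzzy-transitivity bound: a match bridged through a proxy is only ever as strong as its weaker link, so distractors cannot be promoted by a single good intermediate. The paper replaces this fixed rule with a quasi-transitivity function learnt, without labels, from meta-features describing salient modes of appearance variation.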
Citation
Arandelovic, O 2016, 'Learnt quasi-transitive similarity for retrieval from large collections of faces', in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7780897, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, pp. 4883-4892, IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, United States, 26/06/16. https://doi.org/10.1109/CVPR.2016.528
Publication
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
ISSN
1063-6919
Type
Conference item
Rights
Copyright © 2016, IEEE. This work is made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1109/CVPR.2016.528
Collections