Few-shot linguistic grounding of visual attributes and relations using Gaussian kernels
Understanding complex visual scenes is one of the fundamental problems in computer vision, but learning in this domain is challenging due to the inherent richness of the visual world and the vast number of possible scene configurations. Current state-of-the-art approaches to scene understanding often employ deep networks, which require large and densely annotated datasets. This is at odds with the learning abilities of humans, who can generalise from a few examples to unseen situations. In this paper, we propose a unified framework for learning visual representations of words denoting attributes such as “blue” and relations such as “left of”, based on Gaussian models operating in a simple, unified feature space. The strength of our model is that it requires only a small number of weak annotations and generalises easily to unseen situations, such as recognising object relations in unusual configurations. We demonstrate the effectiveness of our model on the predicate detection task. Our model outperforms the state of the art on this task in both the normal and zero-shot scenarios, while training on a dataset an order of magnitude smaller.
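The core idea in the abstract can be illustrated with a minimal sketch: fit one Gaussian per word over a shared feature space from a handful of examples, then score new inputs by Gaussian log-density. This is a hypothetical simplification, not the paper's actual model; the class name, the diagonal-covariance choice, and the (dx, dy) relation features are all assumptions for illustration.

```python
import numpy as np

class GaussianGrounding:
    """Few-shot grounding sketch: one Gaussian per word over a shared
    feature space. A simplified illustration, not the published model."""

    def __init__(self):
        self.models = {}  # word -> (mean, per-dimension variance)

    def fit(self, word, examples):
        X = np.asarray(examples, dtype=float)
        mean = X.mean(axis=0)
        # Diagonal covariance with a small floor keeps few-shot fits stable.
        var = X.var(axis=0) + 1e-3
        self.models[word] = (mean, var)

    def score(self, word, feature):
        mean, var = self.models[word]
        d = np.asarray(feature, dtype=float) - mean
        # Log-density of an axis-aligned Gaussian (additive constants dropped).
        return -0.5 * np.sum(d * d / var + np.log(var))

    def predict(self, feature, words):
        return max(words, key=lambda w: self.score(w, feature))

# Toy demo: ground "left of" vs "right of" from normalised (dx, dy)
# offsets between subject and object box centres (assumed features).
g = GaussianGrounding()
g.fit("left of", [[-0.6, 0.05], [-0.4, -0.1], [-0.5, 0.0]])
g.fit("right of", [[0.5, 0.0], [0.6, 0.1], [0.45, -0.05]])
print(g.predict([-0.55, 0.02], ["left of", "right of"]))  # -> left of
```

Three examples per relation suffice here because each word occupies a compact region of the shared feature space, which is the intuition behind the few-shot claim.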
Koudouna, D & Terzić, K 2021, 'Few-shot linguistic grounding of visual attributes and relations using Gaussian kernels', in G M Farinella, P Radeva, J Braz & K Bouatouch (eds), Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Volume 5: VISAPP), SCITEPRESS - Science and Technology Publications, pp. 146-156, 16th International Conference on Computer Vision Theory and Applications (VISAPP 2021), 8/02/21. https://doi.org/10.5220/0010261301460156
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. This is an open access article under the CC BY-NC-ND license.
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.