Speech emotion recognition with early visual cross-modal enhancement using spiking neural networks
Abstract
Speech emotion recognition (SER) is an important part of the affective computing and signal processing research areas. A number of approaches, especially deep learning techniques, have achieved promising results on SER. However, capturing the temporal and dynamic changes of emotion in speech remains challenging. Spiking Neural Networks (SNNs) have shown promise in machine learning and pattern recognition tasks such as handwriting and facial expression recognition. In this paper, we investigate the use of SNNs for SER and, more importantly, we propose a new cross-modal enhancement approach. This method is inspired by auditory information processing in the brain, where, in multisensory audio-visual processing, auditory information is preceded, enhanced, and predicted by visual processing. We have conducted experiments on two datasets to compare our approach with state-of-the-art SER techniques in both uni-modal and multi-modal settings. The results demonstrate that SNNs are an ideal candidate for modeling temporal relationships in speech features and that our cross-modal approach can significantly improve the accuracy of SER.
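To make the abstract's core claim concrete, a spiking neuron integrates a time-varying input (such as a speech feature stream) and emits discrete spikes, which is what lets an SNN encode temporal dynamics. Below is a minimal leaky integrate-and-fire (LIF) neuron sketch; the decay and threshold values are illustrative assumptions, not parameters from the paper.

```python
def lif_spikes(inputs, decay=0.9, threshold=1.0):
    """Return a binary spike train for a sequence of input currents.

    A leaky integrate-and-fire neuron: the membrane potential decays
    each step, accumulates the input, and fires (then resets) when it
    crosses the threshold. Parameter values are illustrative only.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = decay * v + current    # leaky integration of the input
        if v >= threshold:         # potential crosses the firing threshold
            spikes.append(1)
            v = 0.0                # reset after emitting a spike
        else:
            spikes.append(0)
    return spikes


# A sustained input drives the neuron to fire; timing of spikes
# depends on the input's temporal pattern, not just its magnitude.
print(lif_spikes([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # → [0, 0, 1, 0, 0, 1]
```

The spike timing, rather than a single scalar activation, is what carries the temporal information that the paper argues suits SER.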
Citation
Mansouri-Benssassi, E & Ye, J 2019, 'Speech emotion recognition with early visual cross-modal enhancement using spiking neural networks', in 2019 International Joint Conference on Neural Networks, IJCNN 2019, 8852473, Proceedings of the International Joint Conference on Neural Networks, vol. 2019-July, Institute of Electrical and Electronics Engineers Inc., pp. 1-8, 2019 International Joint Conference on Neural Networks, IJCNN 2019, Budapest, Hungary, 14/07/19. https://doi.org/10.1109/IJCNN.2019.8852473
Publication
2019 International Joint Conference on Neural Networks, IJCNN 2019
ISSN
2161-4393
Type
Conference item
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.