Speech emotion recognition with early visual cross-modal enhancement using spiking neural networks
Speech emotion recognition (SER) is an important part of affective computing and signal processing research. A number of approaches, especially deep learning techniques, have achieved promising results on SER. However, capturing the temporal and dynamic changes of emotion in speech remains challenging. Spiking neural networks (SNNs) have been demonstrated to be a promising approach in machine learning and pattern recognition tasks such as handwriting and facial expression recognition. In this paper, we investigate the use of SNNs for SER and, more importantly, propose a new cross-modal enhancement approach. The method is inspired by auditory information processing in the brain, where, in multisensory audio-visual processing, auditory information is preceded, enhanced and predicted by visual processing. We conducted experiments on two datasets to compare our approach with state-of-the-art SER techniques in both uni-modal and multi-modal settings. The results demonstrate that SNNs are an ideal candidate for modeling temporal relationships in speech features and that our cross-modal approach significantly improves SER accuracy.
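The abstract's claim that SNNs suit temporal modeling rests on how a spiking neuron integrates a time-varying input into a spike train. The sketch below is a minimal leaky integrate-and-fire (LIF) neuron, the standard building block of SNNs; it is purely illustrative and is not the paper's architecture. The parameter values (`tau`, `threshold`) and the toy input sequence are assumptions for demonstration.

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Return a binary spike train for a sequence of input currents.

    tau: membrane leak factor per time step (assumed illustrative value).
    threshold: membrane potential at which the neuron fires and resets.
    """
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = tau * v + x          # leaky integration of the input
        if v >= threshold:       # fire when the potential crosses threshold
            spikes.append(1)
            v = 0.0              # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A toy "feature stream": weak input followed by strong input.
# Stronger, sustained input drives more frequent spikes, so intensity
# over time is encoded in the spike timing.
print(lif_spikes([0.3, 0.3, 0.3, 0.9, 0.9, 0.9]))  # → [0, 0, 0, 1, 0, 1]
```

This temporal, event-driven encoding is what makes spike trains a natural fit for the dynamic emotional cues in speech that the abstract highlights.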
Mansouri-Benssassi, E & Ye, J 2019, 'Speech emotion recognition with early visual cross-modal enhancement using spiking neural networks', in 2019 International Joint Conference on Neural Networks (IJCNN 2019), 8852473, Proceedings of the International Joint Conference on Neural Networks, vol. 2019-July, Institute of Electrical and Electronics Engineers Inc., pp. 1-8, Budapest, Hungary, 14/07/19. https://doi.org/10.1109/IJCNN.2019.8852473
Copyright © 2019 IEEE. This work has been made available online in accordance with publisher policies or with permission. Permission for further reuse of this content should be sought from the publisher or the rights holder. This is the author-created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1109/IJCNN.2019.8852473
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.