Item metadata

dc.contributor.author: Mansouri Benssassi, Esma
dc.contributor.author: Ye, Juan
dc.date.accessioned: 2021-09-24T15:30:08Z
dc.date.available: 2021-09-24T15:30:08Z
dc.date.issued: 2021-08-19
dc.identifier: 275505623
dc.identifier: 6077ae2a-456e-43c8-aa10-ea550f1f0144
dc.identifier: 85113329778
dc.identifier.citation: Mansouri Benssassi, E & Ye, J 2021, 'Investigating multisensory integration in emotion recognition through bio-inspired computational models', IEEE Transactions on Affective Computing, vol. Early Access. https://doi.org/10.1109/TAFFC.2021.3106254 [en]
dc.identifier.issn: 1949-3045
dc.identifier.other: ORCID: /0000-0002-2838-6836/work/100549554
dc.identifier.uri: https://hdl.handle.net/10023/24022
dc.description.abstract: Emotion understanding represents a core aspect of human communication. Our social behaviours are closely linked to expressing our emotions and to understanding others' emotional and mental states through social signals. The majority of existing work extracts meaningful features from each modality and applies fusion techniques at either the feature level or the decision level. However, these techniques cannot capture the constant talk and feedback between different modalities. Such constant talk is particularly important in continuous emotion recognition, where one modality can predict, enhance and complement the other. This paper proposes three multisensory integration models, based on different pathways of multisensory integration in the brain: integration by convergence, early cross-modal enhancement, and integration through neural synchrony. The proposed models are designed and implemented using third-generation neural networks, Spiking Neural Networks (SNNs). The models are evaluated on widely adopted third-party datasets and compared to state-of-the-art multimodal fusion techniques such as early, late and deep-learning fusion. Evaluation results show that the three proposed models achieve results comparable to state-of-the-art supervised learning techniques. More importantly, this paper demonstrates plausible ways to model constant talk between modalities during the training phase, which also brings advantages in generalisation and robustness to noise. [en]
dc.format.extent: 13
dc.format.extent: 2576405
dc.language.iso: eng
dc.relation.ispartof: IEEE Transactions on Affective Computing [en]
dc.subject: Spiking neural network [en]
dc.subject: Multisensory integration [en]
dc.subject: Emotion recognition [en]
dc.subject: Neural synchrony [en]
dc.subject: Graph neural network [en]
dc.subject: QA75 Electronic computers. Computer science [en]
dc.subject: QH301 Biology [en]
dc.subject: 3rd-DAS [en]
dc.subject.lcc: QA75
dc.subject.lcc: QH301
dc.title: Investigating multisensory integration in emotion recognition through bio-inspired computational models [en]
dc.type: Journal article [en]
dc.contributor.institution: University of St Andrews. School of Computer Science [en]
dc.identifier.doi: 10.1109/TAFFC.2021.3106254
dc.description.status: Peer reviewed [en]
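The abstract above contrasts the proposed brain-inspired models with conventional fusion baselines. As a minimal sketch of what those baselines mean, assuming toy NumPy features and a stand-in random classifier (this is illustrative only, not the paper's code): feature-level ("early") fusion concatenates modality features before a single classifier, while decision-level ("late") fusion classifies each modality separately and combines the decisions.

# Hypothetical sketch (not the authors' implementation): early vs. late fusion.
# Feature sizes, class count and the classifier itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality features for one sample: audio and visual.
audio_feat = rng.normal(size=128)   # e.g. an acoustic embedding
visual_feat = rng.normal(size=256)  # e.g. a facial-expression embedding

def classify(features, n_classes=6):
    """Stand-in classifier: random linear layer + softmax over emotion classes."""
    w = rng.normal(size=(features.size, n_classes))
    logits = features @ w
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Early (feature-level) fusion: concatenate modalities, then classify once.
early_probs = classify(np.concatenate([audio_feat, visual_feat]))

# Late (decision-level) fusion: classify each modality, then average the decisions.
late_probs = (classify(audio_feat) + classify(visual_feat)) / 2

print("early fusion prediction:", early_probs.argmax())
print("late fusion prediction:", late_probs.argmax())

Both baselines fuse at a single fixed point in the pipeline; the paper's point is that neither lets the modalities interact continuously during processing, which is what the SNN-based convergence, cross-modal enhancement and neural-synchrony models aim to provide.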

