
Files in this item


There are no files associated with this item.

Item metadata

dc.contributor.advisor: Otto, Thomas
dc.contributor.author: Liu, Yue
dc.coverage.spatial: 191 p.
dc.date.accessioned: 2021-07-28T14:12:26Z
dc.date.available: 2021-07-28T14:12:26Z
dc.date.issued: 2021-07-02
dc.identifier.uri: http://hdl.handle.net/10023/23662
dc.description.abstract: An overarching goal of multisensory research is to understand multisensory behaviour in a wider context but still within a unified framework. Modelling has been a useful tool for understanding how information from different senses is combined to produce behaviour. For example, different models have been used to explain the improvement in speed or precision of multisensory relative to unisensory decisions (Ernst & Banks, 2002; Raab, 1962). However, these modelling approaches each address only a single measure of multisensory decisions: models of speed cannot explain precision, and models of precision cannot explain speed. The field is still working towards a unified modelling framework that can link the different aspects of multisensory decisions together. To this end, I addressed three gaps in research on multisensory decision making, which I believe are essential to the missing linkage between studies of multisensory behaviour. Firstly, most multisensory modelling work rests on an implicit assumption that the processing of a unisensory signal and the processing of the corresponding unisensory component of a multisensory signal are identical. However, the validity of this assumption has not been tested. I offered a way to test this assumption under experimental settings and showed that it is most likely violated, which suggests that studies of multisensory behaviour should consider the influence of context in modelling work. Secondly, I showed that multisensory responses at the neuronal and behavioural levels can be linked: the spatial principle, which was found in neuronal studies, can be mapped to behaviour and accounted for by a race model. Thirdly, I attempted to address the link between speed and precision of multisensory behaviour, two measures that have rarely been studied within a unified paradigm. I found that when both measures were required, participants were far from achieving optimality in either speed or precision, and may be switching strategies towards favouring one of the measures in such decisions. The race model is a strong candidate for refinement in future research to incorporate more aspects of multisensory decisions: as I have shown, it is not limited to explaining the speed-up in simple detection tasks, but also accounts for the influence of context, space, and precision in more complex experimental tasks.
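The two classic modelling approaches cited in the abstract can be illustrated with a minimal sketch. It shows (a) Raab's (1962) independent race model, where the multisensory response time is the minimum of two unisensory finishing times, and (b) Ernst & Banks' (2002) maximum-likelihood cue combination, where the variance of the fused estimate follows from the unisensory variances. The specific RT distributions and variance values below are assumptions chosen for demonstration, not parameters from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative unisensory reaction-time distributions (shifted exponentials;
# the parameters are assumptions for demonstration only), in milliseconds.
rt_audio = 200 + rng.exponential(80, n)
rt_visual = 220 + rng.exponential(90, n)

# Raab's (1962) independent race model: the multisensory response is
# triggered by whichever unisensory process finishes first.
rt_race = np.minimum(rt_audio, rt_visual)

print(f"mean RT audio:  {rt_audio.mean():.0f} ms")
print(f"mean RT visual: {rt_visual.mean():.0f} ms")
print(f"mean RT race:   {rt_race.mean():.0f} ms")  # faster than either alone

# Ernst & Banks (2002): maximum-likelihood cue combination predicts the
# variance of the fused estimate from the two unisensory variances.
var_a, var_v = 4.0, 9.0
var_fused = var_a * var_v / (var_a + var_v)
print(f"fused variance: {var_fused:.2f}")  # smaller than either alone
```

The race model yields a faster mean response than either unisensory distribution purely through statistical facilitation, and the MLE-fused variance is always below the smaller unisensory variance — the speed and precision benefits the abstract discusses.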
dc.language.iso: en
dc.publisher: University of St Andrews
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Multisensory decision
dc.subject: Race model
dc.subject: Audiovisual
dc.subject: Space
dc.subject: RT
dc.subject: Precision
dc.subject: Maximum likelihood estimation
dc.title: Understanding multisensory processing in a wider context using a model-based approach
dc.type: Thesis
dc.contributor.sponsor: University of St Andrews
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD Doctor of Philosophy
dc.publisher.institution: The University of St Andrews
dc.rights.embargodate: 2022-06-11
dc.rights.embargoreason: Thesis restricted in accordance with University regulations. Electronic copy restricted until 11th June 2022
dc.identifier.doi: https://doi.org/10.17630/sta/117

