Adaptive guidance in extended reality environments
Abstract
Learning depends on the dynamics of one’s personal circumstances and on an immediate environment that provides hands-on experience. As a result, educators constantly strive to create personalised learning experiences for learners. The increasing use of technology in education has led to the development of various e-learning systems. However, these systems are limited by their inability to create immersive and interactive learning environments that cater to each learner’s individual needs and preferences. Extended Reality (XR) technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) offer a new way of delivering Experiential Learning (ExL) that can meet these challenges. However, existing XR-based learning systems lack the ability to adapt to learners’ individual needs and preferences, which may reduce learning performance, and there is little research or guidance on how to incorporate XR technologies effectively into the design of adaptive experiential learning systems. This thesis therefore aims to contribute new knowledge on how XR technologies can be used to design and develop interactive, adaptive ExL systems that can be integrated into future learning environments. This is accomplished by (i) presenting a comprehensive design space grounded in XR technology and the theoretical underpinnings of learning and instructional guidance, and by (ii) conducting three user studies, each focusing on an interactive experiential learning system developed from a particular configuration of the presented design space.
The first study focuses on how different representation methods of a future building (paper, desktop and VR HMD) affect the user experience, dimensions of user engagement and the understanding of the space with minimal guidance, and how well they support users in projecting themselves into the future office space. The second study explores how different factors of instructional guidance, namely the amount of guidance (fixed vs. adaptive-amount) and the type of guidance (fixed vs. adaptive-associations), affect the user experience, engagement and learning outcomes in a language learning scenario. The final study looks in detail at how different interfaces (AR vs. non-AR) and types of guidance (keyword only vs. keyword + visualisation) affect user experience, engagement and, consequently, learning performance in vocabulary learning.
The results of this research provide insights into the design and development of interactive XR-based experiential learning systems that can meet the diverse learning needs and preferences of individual learners, leading to improved learning outcomes. This work will be of interest to researchers and practitioners in the fields of Human-Computer Interaction (HCI), instructional design and education.
Type
Thesis, PhD (Doctor of Philosophy)
Rights
Creative Commons Attribution-ShareAlike 4.0 International
http://creativecommons.org/licenses/by-sa/4.0/
Description of related resources
Weerasinghe, A. M., Copic Pucihar, K., Ducasse, J., Quigley, A. J., Toniolo, A., Miguel, A., Caluya, N., & Kljun, M. (2022). Exploring the future building: representational effect on projecting oneself into the future office space. Virtual Reality, First Online. https://doi.org/10.1007/s10055-022-00673-z [http://hdl.handle.net/10023/25843 : Open Access version]
Weerasinghe, A. M., Biener, V., Grubert, J., Quigley, A. J., Toniolo, A., Copic Pucihar, K., & Kljun, M. (2022). VocabulARy: learning vocabulary in AR supported by keyword visualizations. IEEE Transactions on Visualization and Computer Graphics, 28(11), 3748-3758. https://doi.org/10.1109/TVCG.2022.3203116 [http://hdl.handle.net/10023/26237 : Open Access version]
Weerasinghe, A. M., Quigley, A. J., Copic Pucihar, K., Toniolo, A., Miguel, A. R., & Kljun, M. (2022). Arigatō: effects of adaptive guidance on engagement and performances in augmented reality learning environments. IEEE Transactions on Visualization and Computer Graphics, 28(11), 3737-3747. https://doi.org/10.1109/TVCG.2022.3203088 [http://hdl.handle.net/10023/26235 : Open Access version]
Except where otherwise noted within the work, this item's licence for re-use is described as Creative Commons Attribution-ShareAlike 4.0 International
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.