VocabulARy: learning vocabulary in AR supported by keyword visualizations
Learning vocabulary in a primary or secondary language is enhanced when we encounter words in context. This context can be afforded by the place or activity we are engaged with. Existing learning approaches include formal learning, mnemonics, flashcards, and the use of a dictionary or thesaurus, all leading to practice with new words in context. In this work, we propose an enhancement to the language learning process by providing the user with words and learning tools in context, with VocabulARy. VocabulARy visually annotates objects in the user's surroundings in AR with the corresponding English (first language) and Japanese (second language) words to enhance the language learning process. In addition to the written and audio description of each word, we also present the user with a keyword and its visualisation to enhance memory retention. We evaluate our prototype by comparing it to an alternate AR system that does not show an additional visualisation of the keyword, as well as to two non-AR systems on a tablet, one with and one without the keyword visualisation. Our results indicate that AR outperforms the tablet system regarding immediate recall, mental effort and task-completion time. Additionally, the visualisation approach scored significantly higher than showing only the written keyword with respect to immediate and delayed recall, learning efficiency, mental effort and task-completion time.
Weerasinghe, A. M., Biener, V., Grubert, J., Quigley, A. J., Toniolo, A., Copic Pucihar, K. & Kljun, M. 2022, 'VocabulARy: learning vocabulary in AR supported by keyword visualizations', IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 11, pp. 3748-3758. https://doi.org/10.1109/TVCG.2022.3203116
Copyright © 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. This work has been made available online in accordance with publisher policies or with permission. Permission for further reuse of this content should be sought from the publisher or the rights holder. This is the author-created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1109/TVCG.2022.3203116.
Funding: This research was supported by the European Commission through the InnoRenew CoE project (Grant Agreement 739574) under the Horizon 2020 Widespread-Teaming program and the Republic of Slovenia (investment funding of the Republic of Slovenia and the European Union of the European Regional Development Fund). We also acknowledge support from the Slovenian research agency ARRS (program no. BI-DE/20-21-002, P1-0383, J1-9186, J1-1715, J5-1796, and J1-1692).
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.