A dynamic model of trust in dialogues
In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.
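The per-iteration update loop described above can be sketched in code. Note that the numeric trust scale, the discount factor, and the recovery increment below are hypothetical placeholders for illustration only; they are not the three rules defined in the paper, nor the preference-based argumentation machinery it uses.

```python
# A minimal sketch of per-iteration trust updating, as described in the
# abstract. All numeric parameters are illustrative assumptions.

def update_trust(trust, lied, penalty=0.5, recovery=0.1):
    """Return updated trust in [0, 1] after one dialogue iteration."""
    if lied:
        trust *= penalty  # discount trust when a lie is caught
    else:
        trust = min(1.0, trust + recovery)  # slowly regain trust
    return trust

# One possible dialogue: trust drops after the detected lie in
# iteration 2, then is gradually regained over later iterations.
trust = 1.0
for lied in [False, True, False, False]:
    trust = update_trust(trust, lied)
```

After this run, trust sits partway between the post-lie value and full trust, mirroring the "discounted until trust is regained" behaviour the abstract describes.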
Ogunniye, G., Toniolo, A. & Oren, N. 2018, 'A dynamic model of trust in dialogues', in E. Black, S. Modgil & N. Olsen (eds), Theory and Applications of Formal Argumentation: 4th International Workshop, TAFA 2017, Melbourne, VIC, Australia, August 19-20, 2017, Revised Selected Papers, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10757 LNAI, Springer, Cham, pp. 211-226. https://doi.org/10.1007/978-3-319-75553-3_15
© Springer International Publishing AG, part of Springer Nature 2018. This work has been made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review, and as such it may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1007/978-3-319-75553-3_15
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.