How people visually represent discrete constraint problems
Abstract
Problems such as timetabling or personnel allocation can be modeled and solved using discrete constraint programming languages. However, while existing constraint solving software solves such problems quickly in many cases, these systems involve specialized languages that require significant time and effort to learn and apply. These languages are typically text-based and often difficult to interpret and understand quickly, especially for people without engineering or mathematics backgrounds. Visualization could provide an alternative way to model and understand such problems. Although many visual programming languages exist for procedural languages, visual encoding of problem specifications has not received much attention. Future problem visualization languages could represent problem elements and their constraints unambiguously, but without unnecessary cognitive burdens for those needing to translate their problem's mental representation into diagrams. As a first step towards such languages, we conducted a study that catalogs how people represent constraint problems graphically. We studied three groups with different expertise: non-computer scientists, computer scientists, and constraint programmers. We analyzed their marks on paper (e.g., arrows), gestures (e.g., pointing), and the mappings to problem concepts (e.g., containers, sets). We provide foundations to guide future tool designs, allowing people to effectively grasp, model, and solve problems through visual representations.
Citation
Zhu, X., Nacenta, M., Akgün, Ö. & Nightingale, P. W. 2019, 'How people visually represent discrete constraint problems', IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 8, pp. 2603-2619. https://doi.org/10.1109/TVCG.2019.2895085
Publication
IEEE Transactions on Visualization and Computer Graphics
Status
Peer reviewed
ISSN
1077-2626
Type
Journal article
Rights
© 2018, IEEE. This work has been made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review and as such may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1109/TVCG.2019.2895085
Description
Funding: This work is supported by EPSRC grants DTG1796157 and EP/P015638/1.
Collections
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.