How people visually represent discrete constraint problems
Problems such as timetabling or personnel allocation can be modeled and solved using discrete constraint programming languages. However, while existing constraint solving software solves such problems quickly in many cases, these systems involve specialized languages that require significant time and effort to learn and apply. These languages are typically text-based and often difficult to interpret and understand quickly, especially for people without engineering or mathematics backgrounds. Visualization could provide an alternative way to model and understand such problems. Although many visual programming languages exist for procedural programming, visual encoding of problem specifications has not received much attention. Future problem visualization languages could represent problem elements and their constraints unambiguously, without imposing unnecessary cognitive burden on those who need to translate their mental representation of a problem into diagrams. As a first step towards such languages, we conducted a study that catalogs how people represent constraint problems graphically. We studied three groups with different expertise: non-computer scientists, computer scientists and constraint programmers, and analyzed their marks on paper (e.g., arrows), gestures (e.g., pointing) and their mappings to problem concepts (e.g., containers, sets). We provide foundations to guide the design of future tools that allow people to effectively grasp, model and solve problems through visual representations.
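To illustrate what a textual discrete constraint model involves (and why, as the abstract argues, it can be hard to interpret without a technical background), here is a minimal sketch, not taken from the paper: a toy timetabling problem expressed as variables, domains and constraints, solved by naive enumeration in Python. The course names, slots and constraints are invented for illustration; real constraint languages are declarative, and their solvers are far more efficient than brute-force search.

```python
# Illustrative sketch only (not from the paper): a toy timetabling
# problem as a discrete constraint problem. Variables are courses,
# the domain of each variable is a set of time slots, and constraints
# restrict which combined assignments are acceptable.
from itertools import product

COURSES = ["maths", "physics", "art"]  # hypothetical decision variables
SLOTS = [1, 2, 3]                      # domain: each course gets one slot


def satisfies(assignment):
    """Invented constraints: maths and physics share a teacher, so they
    need different slots, and art must be scheduled after maths."""
    return (assignment["maths"] != assignment["physics"]
            and assignment["art"] > assignment["maths"])


# Naive enumeration of every assignment of slots to courses,
# keeping only those that satisfy all constraints.
solutions = [
    dict(zip(COURSES, slots))
    for slots in product(SLOTS, repeat=len(COURSES))
    if satisfies(dict(zip(COURSES, slots)))
]
print(solutions)
```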
Zhu, X., Nacenta, M., Akgün, Ö. & Nightingale, P. W. 2019, 'How people visually represent discrete constraint problems', IEEE Transactions on Visualization and Computer Graphics, vol. Early Access. https://doi.org/10.1109/TVCG.2019.2895085
© 2018, IEEE. This work has been made available online in accordance with the publisher's policies. This is the author-created, accepted version of the manuscript following peer review and, as such, it may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1109/TVCG.2019.2895085
Funding: This work is supported by EPSRC grants DTG1796157 and EP/P015638/1.
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.