Item metadata

dc.contributor.author: Dang, Nguyen
dc.contributor.author: Akgun, Ozgur
dc.contributor.author: Espasa Arxer, Joan
dc.contributor.author: Miguel, Ian James
dc.contributor.author: Nightingale, Peter
dc.contributor.editor: Solnon, Christine
dc.date.accessioned: 2022-07-28T16:30:04Z
dc.date.available: 2022-07-28T16:30:04Z
dc.date.issued: 2022-07-23
dc.identifier: 279532119
dc.identifier: c341e68d-f4c1-4ab5-b77b-0693aa1cc90f
dc.identifier: 85135706603
dc.identifier.citation: Dang, N, Akgun, O, Espasa Arxer, J, Miguel, I J & Nightingale, P 2022, A framework for generating informative benchmark instances. In C Solnon (ed.), 28th International Conference on Principles and Practice of Constraint Programming (CP 2022), 18, Leibniz International Proceedings in Informatics (LIPIcs), vol. 235, Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, Dagstuhl. https://doi.org/10.4230/LIPIcs.CP.2022.18
dc.identifier.isbn: 9783959772402
dc.identifier.issn: 1868-8969
dc.identifier.other: ORCID: /0000-0002-6930-2686/work/116597866
dc.identifier.other: ORCID: /0000-0001-9519-938X/work/116598018
dc.identifier.other: ORCID: /0000-0002-2693-6953/work/116598356
dc.identifier.uri: https://hdl.handle.net/10023/25744
dc.description: Funding: Nguyen Dang is a Leverhulme Early Career Fellow; Ian Miguel is supported by EPSRC EP/V027182/1.
dc.description.abstract: Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data that define instances that are graded (solvable at a certain difficulty level for a solver) or can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problems that were used in the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver for the whole instance space; for example, by finding subsets of instances where the solver performance significantly varies from its average performance.
dc.format.extent: 18
dc.format.extent: 1119910
dc.language.iso: eng
dc.publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
dc.relation.ispartof: 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)
dc.relation.ispartofseries: Leibniz International Proceedings in Informatics (LIPIcs)
dc.subject: Instance generation
dc.subject: Benchmarking
dc.subject: Constraint programming
dc.subject: QA75 Electronic computers. Computer science
dc.subject: QA76 Computer software
dc.subject: DAS
dc.subject.lcc: QA75
dc.subject.lcc: QA76
dc.title: A framework for generating informative benchmark instances
dc.type: Conference item
dc.contributor.sponsor: The Leverhulme Trust
dc.contributor.sponsor: EPSRC
dc.contributor.institution: University of St Andrews. Centre for Interdisciplinary Research in Computational Algebra
dc.contributor.institution: University of St Andrews. School of Computer Science
dc.contributor.institution: University of St Andrews. Sir James Mackenzie Institute for Early Diagnosis
dc.identifier.doi: 10.4230/LIPIcs.CP.2022.18
dc.identifier.grantnumber: ECF-2020-168
dc.identifier.grantnumber: EP/V027182/1