Computer Science Research
https://hdl.handle.net/10023/59
2024-03-29T00:26:49Z
Simplified cloud instance selection
Boonprasop, Chalee
Barker, Adam David
https://hdl.handle.net/10023/17165
2023-04-19T00:43:40Z
2018-04-23T00:00:00Z
Cloud computing delivers computational services to anyone over the internet. Cloud providers offer these services through a simplified billing model in which customers rent services based on the types of computing power they require. However, given the vast choice, it is difficult for a user to select the optimal instance types for a given workload or application. In this paper, we propose a user-friendly cloud instance recommendation system which, given a set of weighted coefficients representing the relevance of CPU, memory, storage and network along with a price, will recommend the best performing instances. The system only requires provider-specified data about instance types and does not require costly cloud benchmarking. We evaluate our approach on Microsoft Azure across a number of common workload types.
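A minimal sketch of the ranking idea the abstract describes, assuming instance specs have been normalised per dimension; the instance names, spec values and weights below are invented for illustration, not the paper's data.

```python
def recommend(instances, weights, budget):
    """Rank instances by a weighted sum of provider-specified specs,
    keeping only those within an hourly price budget."""
    def score(inst):
        return sum(weights[k] * inst[k] for k in weights)
    affordable = [i for i in instances if i["price"] <= budget]
    return sorted(affordable, key=score, reverse=True)

# Hypothetical Azure-like instances, specs normalised to [0, 1] per dimension.
instances = [
    {"name": "D2s", "cpu": 0.3, "memory": 0.3, "storage": 0.2, "network": 0.4, "price": 0.10},
    {"name": "E4s", "cpu": 0.5, "memory": 0.8, "storage": 0.4, "network": 0.5, "price": 0.25},
    {"name": "F8s", "cpu": 0.9, "memory": 0.4, "storage": 0.3, "network": 0.6, "price": 0.34},
]
weights = {"cpu": 0.6, "memory": 0.2, "storage": 0.1, "network": 0.1}  # CPU-bound workload
print([i["name"] for i in recommend(instances, weights, budget=0.30)])  # ['E4s', 'D2s']
```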
Supervisor recommendation tool for Computer Science projects
Zemaityte, Gintare
Terzic, Kasim
https://hdl.handle.net/10023/16935
2024-02-27T00:38:41Z
2019-01-09T00:00:00Z
In most Computer Science programmes, students are required to undertake an individual project under the guidance of a supervisor during their studies. With increasing student numbers, matching students to suitable supervisors is becoming ever more challenging. This paper presents a software tool which assists Computer Science students in identifying the most suitable supervisor for their final year project. It does this by matching a list of keywords or a project proposal provided by the students to a list of keywords which were automatically extracted from freely available data for each potential supervisor. The tool was evaluated using both manual and user testing, with generally positive results and user feedback. 83% of respondents agree that the current implementation of the tool is accurate, with 67% saying it would be a useful tool to have when looking for a supervisor. The tool is currently being adapted for wider use in the School.
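As an illustration of the matching step, a toy keyword-overlap ranker; the supervisor names and keyword lists are invented, and the actual tool may extract and weight keywords quite differently.

```python
def rank_supervisors(student_keywords, supervisor_keywords):
    """Rank supervisors by overlap with the student's keyword list."""
    student = {k.lower() for k in student_keywords}
    scores = {
        name: len(student & {k.lower() for k in kws})
        for name, kws in supervisor_keywords.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

supervisors = {  # hypothetical supervisors and extracted keywords
    "A": ["machine learning", "vision", "saliency"],
    "B": ["semigroups", "algebra", "GAP"],
    "C": ["HCI", "wearables", "gestures"],
}
print(rank_supervisors(["vision", "machine learning"], supervisors))
```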
Proof-carrying plans
Schwaab, Christopher Joseph
Komendantskaya, Ekaterina
Hill, Alisdair
Farka, František
Petrick, Ronald
Wells, Joe
Hammond, Kevin
https://hdl.handle.net/10023/16855
2023-04-19T00:43:25Z
2019-01-01T00:00:00Z
It is becoming increasingly important to verify safety and security of AI applications. While declarative languages (of the kind found in automated planners and model checkers) are traditionally used for verifying AI systems, a big challenge is to design methods that generate verified executable programs. A good example of such a “verification to implementation” cycle is given by automated planning languages like PDDL, where plans are found via a model search in a declarative language, but then interpreted or compiled into executable code in an imperative language. In this paper, we show that this method can itself be verified. We present a formal framework and a prototype Agda implementation that represent PDDL plans as executable functions that inhabit types that are given by formulae describing planning problems. By exploiting the well-known Curry-Howard correspondence, type-checking then automatically ensures that the generated program corresponds precisely to the specification of the planning problem.
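The paper's guarantee comes from dependent types in Agda; as a rough dynamic analogue only, the sketch below checks a STRIPS-style plan step by step against a world state. The action definitions and goal are invented for illustration.

```python
def check_plan(state, plan, actions, goal):
    """Execute a plan symbolically, failing if any precondition is unmet."""
    state = set(state)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:
            raise ValueError(f"precondition of {name} not satisfied")
        state = (state - delete) | add
    return goal <= state  # True iff the executed plan reaches the goal

actions = {  # name: (preconditions, add effects, delete effects)
    "pick": ({"handEmpty", "onTable"}, {"holding"}, {"handEmpty", "onTable"}),
    "drop": ({"holding"}, {"handEmpty", "onTable"}, {"holding"}),
}
print(check_plan({"handEmpty", "onTable"}, ["pick", "drop"],
                 actions, goal={"onTable", "handEmpty"}))  # True
```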
Employing domain specific discriminative information to address inherent limitations of the LBP descriptor in face recognition
Fan, Junjie
Arandjelovic, Ognjen
https://hdl.handle.net/10023/16799
2023-04-19T00:43:28Z
2018-10-15T00:00:00Z
The local binary pattern (LBP) descriptor and its derivatives have a demonstrated track record of good performance in face recognition. Nevertheless, the original descriptor, the framework within which it is employed, and the aforementioned improvements of these in the existing literature all suffer from a number of inherent limitations. In this work we highlight these and propose novel ways of addressing them in a principled fashion. Specifically, we introduce (i) gradient based weighting of local descriptor contributions to region based histograms as a means of avoiding data smoothing by non-discriminative image loci, and (ii) Gaussian fuzzy region membership as a means of achieving robustness to registration errors. Importantly, the nature of these contributions allows the proposed techniques to be combined with the existing extensions to the LBP descriptor, thus making them universally recommendable. Effectiveness is demonstrated on the notoriously challenging Extended Yale B face corpus.
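A minimal sketch of contribution (i): each pixel's LBP code votes into a regional histogram with weight proportional to the local gradient magnitude, so flat, non-discriminative loci contribute little. Plain NumPy, 8-neighbour LBP over the image interior; the parameters are illustrative.

```python
import numpy as np

def weighted_lbp_histogram(img):
    img = img.astype(float)
    c = img[1:-1, 1:-1]                            # centre pixels
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = sum((n >= c).astype(int) << b for b, n in enumerate(neighbours))
    gy, gx = np.gradient(img)
    weight = np.hypot(gx, gy)[1:-1, 1:-1]          # gradient-magnitude weights
    hist = np.bincount(codes.ravel(), weights=weight.ravel(), minlength=256)
    return hist / (hist.sum() or 1.0)

hist = weighted_lbp_histogram(np.random.rand(32, 32))
print(hist.shape)  # (256,): one bin per LBP code
```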
Enabling single-handed interaction in mobile and wearable computing
Yeo, Hui Shyong
https://hdl.handle.net/10023/16707
2023-04-19T00:43:29Z
2018-10-11T00:00:00Z
Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are becoming larger and smaller to accommodate both intimate and practical uses. Some mobile device screens are becoming larger to accommodate new experiences (e.g., phablet, tablet, eReader), whereas screen sizes on wearable devices are becoming smaller to allow them to fit into more places (e.g., smartwatch, wrist-band and eye-wear). However, these trends are making it difficult to use such devices with only one hand due to their placement, limited thumb reach and the fat-finger problem. This is especially true as there are many occasions when a user’s other hand is occupied (encumbered) or not available. This thesis work explores, creates and studies novel interaction techniques that enable effective single-hand usage on mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.
Diversity computing
Fletcher-Watson, Sue
De Jaegher, Hanne
van Dijk, Jelle
Frauenberger, Christopher
Magnee, Maurice
Ye, Juan
https://hdl.handle.net/10023/16586
2022-12-02T11:30:03Z
2018-08-22T00:00:00Z
Teaching data ethics : "We're going to ethics the heck out of this"
Henderson, Tristan
https://hdl.handle.net/10023/16570
2023-04-19T00:43:23Z
2019-01-09T00:00:00Z
This paper outlines a new Data Ethics & Privacy module that was introduced to computer science students in 2018. The module aims to raise student awareness of current debates in computer science such as bias in artificial intelligence, algorithmic accountability, filter bubbles and data protection, and practical mechanisms for addressing these issues. To do this, the module includes interdisciplinary content from ethics, law and computer science, and also adopts some teaching methods from the law. I describe the format of the module, challenges with module design and approval, some initial comments on the first year’s cohort, and plans for future improvements. I believe that the topic is currently important and this discussion might be of interest to other computer science departments considering the introduction of similar content.
AIF-EL : an OWL2-EL-compliant AIF ontology
Cerutti, Federico
Toniolo, Alice
Norman, Timothy J.
Bex, Floris
Rahwan, Iyad
Reed, Chris
https://hdl.handle.net/10023/16191
2022-04-07T15:30:12Z
2018-09-11T00:00:00Z
This paper briefly describes AIF-EL, an OWL2-EL compliant ontology for the Argument Interchange Format.
CISpaces.org : from fact extraction to report generation
Cerutti, Federico
Norman, Timothy J.
Toniolo, Alice
Middleton, Stuart E.
https://hdl.handle.net/10023/16190
2022-04-14T20:30:55Z
2018-09-11T00:00:00Z
We introduce CISpaces.org, a tool to support situational understanding in intelligence analysis that complements but does not replace human expertise. The system combines natural language processing, argumentation-based reasoning, and natural language generation to produce intelligence reports from social media data, and to record the process of forming hypotheses from relationships among information. In this paper, we show how CISpaces.org meets the desirable requirements elicited from senior professionals, and demonstrate its usage and capabilities to support analysts in delivering effective and tailored intelligence to decision makers.
Querying metric spaces with bit operations
Connor, Richard
Dearle, Alan
https://hdl.handle.net/10023/16177
2023-04-19T00:43:06Z
2018-01-01T00:00:00Z
Metric search techniques can be usefully characterised by the time at which distance calculations are performed during a query. Most exact search mechanisms use a “just-in-time” approach where distances are calculated as part of a navigational strategy. An alternative is to use a “one-time” approach, where distances to a fixed set of reference objects are calculated at the start of each query. These distances are typically used to re-cast data and queries into a different space where querying is more efficient, allowing an approximate solution to be obtained. In this paper we use a “one-time” approach for an exact search mechanism. A fixed set of reference objects is used to define a large set of regions within the original space, and each query is assessed with respect to the definition of these regions. Data is then accessed if, and only if, it is useful for the calculation of the query solution. As dimensionality increases, the number of defined regions must increase, but the memory required for the exclusion calculation does not. We show that the technique gives excellent performance over the SISAP benchmark data sets, and most interestingly we show how increases in dimensionality may be countered by relatively modest increases in the number of reference objects used.
Funding: This work was supported by ESRC grant ES/L007487/1 “Administrative Data Research Centre—Scotland".
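A generic pivot-based exclusion sketch in the spirit of the "one-time" approach: distances to a fixed set of reference objects are computed once per object, and at query time the triangle inequality discards objects without touching their data. The paper's bit-level region encoding is not reproduced here; the metric and data are toy examples.

```python
def range_query(query, objects, pivots, dist, radius):
    q_to_p = [dist(query, p) for p in pivots]
    results = []
    for obj, obj_to_p in objects:            # obj_to_p precomputed, one-time
        # Triangle-inequality lower bound on dist(query, obj):
        # if it already exceeds the radius, exclude without a real distance.
        if max(abs(qp - op) for qp, op in zip(q_to_p, obj_to_p)) > radius:
            continue
        if dist(query, obj) <= radius:
            results.append(obj)
    return results

dist = lambda a, b: abs(a - b)               # toy 1-D metric for illustration
pivots = [0.0, 5.0, 10.0]
data = [1.0, 2.5, 4.0, 7.5, 9.0]
objects = [(x, [dist(x, p) for p in pivots]) for x in data]
print(range_query(3.0, objects, pivots, dist, radius=1.5))  # [2.5, 4.0]
```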
Biologically inspired vision for human-robot interaction
Saleiro, Mario
Farrajota, Miguel
Terzić, Kasim
Krishna, Sai
Rodrigues, João M.F.
du Buf, J. M. Hans
https://hdl.handle.net/10023/15958
2023-01-17T10:30:05Z
2015-01-01T00:00:00Z
Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.
A parametric spectral model for texture-based salience
Terzić, Kasim
Krishna, Sai
Du Buf, J. M. H.
https://hdl.handle.net/10023/15957
2023-01-17T10:30:03Z
2015-01-01T00:00:00Z
We present a novel saliency mechanism based on texture. Local texture at each pixel is characterised by the 2D spectrum obtained from oriented Gabor filters. We then apply a parametric model and describe the texture at each pixel by a combination of two 1D Gaussian approximations. This results in a simple model which consists of only four parameters. These four parameters are then used as feature channels and standard Difference-of-Gaussian blob detection is applied in order to detect salient areas in the image, similar to the Itti and Koch model. Finally, a diffusion process is used to sharpen the resulting regions. Evaluation on a large saliency dataset shows a significant improvement of our method over the baseline Itti and Koch model.
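A sketch of the final detection stage only: centre-surround (Difference-of-Gaussian) responses summed over feature channels. Random channels stand in for the four Gabor-derived parameters, and the sigma values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_salience(channels, sigma=2.0, k=1.6):
    """Sum, over channels, of centre-surround (DoG) responses."""
    salience = np.zeros_like(channels[0])
    for ch in channels:
        salience += np.abs(gaussian_filter(ch, sigma) -
                           gaussian_filter(ch, k * sigma))
    return salience / len(channels)

channels = [np.random.rand(64, 64) for _ in range(4)]  # stand-ins for the
print(dog_salience(channels).shape)                    # four Gabor parameters
```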
Ten simple rules for measuring the impact of workshops
Sufi, Shoaib
Nenadic, Aleksandra
Silva, Raniere
Duckles, Beth
Simera, Iveta
de Beyer, Jennifer A.
Struthers, Caroline
Nurmikko-Fuller, Terhi
Bellis, Louisa
Miah, Wadud
Wilde, Adriana
Emsley, Iain
Philippe, Olivier
Balzano, Melissa
Coelho, Sara
Ford, Heather
Jones, Catherine
Higgins, Vanessa
https://hdl.handle.net/10023/15919
2022-04-11T13:30:49Z
2018-08-30T00:00:00Z
Workshops are used to explore a specific topic, transfer knowledge, solve identified problems or create something new. In funded research projects and other research endeavours, workshops are the mechanism to gather the wider project, community or interested people together around a particular topic. However, natural questions arise: how do we measure the impact of these workshops? Do we know whether they are meeting the goals and objectives we set for them? What indicators should we use? In response to these questions, this paper will outline rules that will improve the measurement of the impact of workshops.
SS, AN, RS, IE, and OP acknowledge the support of EPSRC, BBSRC and ESRC Grant EP/N006410/1 for the UK Software Sustainability Institute, http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/N006410/1. IS, CS, and JdB acknowledge support from Cancer Research UK (grant C5529/A16895).
Automatic generation and selection of streamlined constraint models via Monte Carlo search on a model lattice
Spracklen, Patrick
Akgun, Ozgur
Miguel, Ian James
https://hdl.handle.net/10023/15894
2024-03-27T00:38:28Z
2018-01-01T00:00:00Z
Streamlined constraint reasoning is the addition of uninferred constraints to a constraint model to reduce the search space, while retaining at least one solution. Previously it has been established that it is possible to generate streamliners automatically from abstract constraint specifications in Essence and that effective combinations of streamliners can allow instances of much larger scale to be solved. A shortcoming of the previous approach was the crude exploration of the power set of all combinations using depth and breadth first search. We present a new approach based on Monte Carlo search over the lattice of streamlined models, which efficiently identifies effective streamliner combinations.
Funding: EPSRC EP/P015638/1.
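A toy Monte Carlo walk over the power-set lattice of candidate streamliners. Here evaluate() is a stub standing in for solving training instances under a streamliner combination, and the streamliner names are invented.

```python
import random

def evaluate(combo):
    # Stub: a cheap pseudo-score in place of real solver runtimes.
    return (hash(frozenset(combo)) % 1000) / 1000 / (1 + len(combo))

def monte_carlo_search(streamliners, iterations=200, max_depth=3):
    best_combo, best_score = frozenset(), evaluate(frozenset())
    for _ in range(iterations):
        combo = set()
        for _ in range(random.randint(1, max_depth)):  # random descent through
            combo.add(random.choice(streamliners))     # the lattice of subsets
            score = evaluate(frozenset(combo))
            if score > best_score:
                best_combo, best_score = frozenset(combo), score
    return best_combo, best_score

random.seed(0)
print(monte_carlo_search(["even-sized", "first-index-min", "all-distinct"]))
```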
Fidelity perception of 3D models on the web
Bakri, Hussein
Miller, Alan Henry David
Oliver, Iain Angus
https://hdl.handle.net/10023/15483
2023-01-22T12:30:31Z
2018-01-01T00:00:00Z
Cultural heritage artefacts act as a gateway helping people learn about their social traditions and history. However, preserving these artefacts faces many difficulties, including potential destruction or damage from global warming, wars and conflicts, and degradation from day-to-day use. In addition, artefacts can only be present in one place at a time, and many of them cannot be exhibited due to the limited physical space of museums. The digital domain offers opportunities to capture and represent the form and texture of these artefacts and to overcome the previously mentioned constraints by allowing people to access and interact with them on multiple platforms (mobile devices, tablets and personal computers) and network regimes. Through two experiments we study the subjective perception of the fidelity of 3D models in web browsers in order to discover perceptible resolution thresholds. This helps us create models of reasonable graphical complexity that can be fetched on the widest range of end devices. It also enables us to design systems which efficiently optimise the user experience by adapting their behaviour based upon user perception, model characteristics and digital infrastructure.
Using metric space indexing for complete and efficient record linkage
Akgün, Özgür
Dearle, Alan
Kirby, Graham Njal Cameron
Christen, Peter
https://hdl.handle.net/10023/15181
2024-02-15T00:36:57Z
2018-01-01T00:00:00Z
Record linkage is the process of identifying records that refer to the same real-world entities in situations where entity identifiers are unavailable. Records are linked on the basis of similarity between common attributes, with every pair being classified as a link or non-link depending on their similarity. Linkage is usually performed in a three-step process: first, groups of similar candidate records are identified using indexing, then pairs within the same group are compared in more detail, and finally classified. Even state-of-the-art indexing techniques, such as locality sensitive hashing, have potential drawbacks. They may fail to group together some true matching records with high similarity, or they may group records with low similarity, leading to high computational overhead. We propose using metric space indexing (MSI) to perform complete linkage, resulting in a parameter-free process combining indexing, comparison and classification into a single step delivering complete and efficient record linkage. An evaluation on real-world data from several domains shows that linkage using MSI can yield better quality than current indexing techniques, with similar execution cost, without the need for domain knowledge or trial and error to configure the process.
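To make the "linkage as similarity search" framing concrete: two records link iff the distance between their common attributes is within a threshold. The sketch below uses edit distance and a naive all-pairs loop where a real MSI deployment would use the index to prune comparisons; the records are invented.

```python
def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def link(records_a, records_b, threshold=2):
    return [(x, y) for x in records_a for y in records_b
            if levenshtein(x, y) <= threshold]

print(link(["jon smith", "ann lee"], ["john smith", "anne leigh"]))
# [('jon smith', 'john smith')]
```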
Guest editorial: High-level programming for heterogeneous parallel systems
Brown, Christopher Mark
https://hdl.handle.net/10023/13739
2024-02-27T00:43:25Z
2018-05-18T00:00:00Z
Change blindness in proximity-aware mobile interfaces
Brock, Michael Oliver
Quigley, Aaron John
Kristensson, Per Ola
https://hdl.handle.net/10023/13179
2023-04-19T00:42:24Z
2018-04-21T00:00:00Z
Interface designs on both small and large displays can encourage people to alter their physical distance to the display. Mobile devices support this form of interaction naturally, as the user can move the device closer or further away as needed. The current generation of mobile devices can employ computer vision, depth sensing and other inference methods to determine the distance between the user and the display. Once known, a system can adapt the rendering of display content accordingly and enable proximity-aware mobile interfaces. The dominant method of exploiting proximity-aware interfaces is to remove or superimpose visual information. In this paper, we investigate change blindness in such interfaces. We present the results of two studies. In our first study we show that a proximity-aware mobile interface results in significantly more change blindness errors than a non-moving interface. The absolute difference in error rates was 13.7%. In our second study we show that within a proximity-aware mobile interface, gradual changes induce significantly more change blindness errors than instant changes, confirming expected change blindness behavior. Based on our results we discuss the implications of either exploiting change blindness effects or mitigating them when designing mobile proximity-aware interfaces.
Funding: Google Faculty award and EPSRC grants EP/N010558/1 and EP/N014278/1 (P.O.K.).
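A sketch of the kind of proximity-aware adaptation the studies examine: rendering detail as a function of user-to-display distance, with both instant (snapping) and gradual variants. The thresholds and labels are invented.

```python
def detail_level(distance_cm, instant=True):
    levels = [(20, "full"), (40, "summary"), (float("inf"), "glanceable")]
    if instant:
        # Instant variant: snap to the first band the distance falls into.
        return next(label for limit, label in levels if distance_cm <= limit)
    # Gradual variant: interpolate a continuous detail fraction instead.
    return max(0.0, min(1.0, 1 - distance_cm / 60))

print(detail_level(15), detail_level(35), detail_level(30, instant=False))
# full summary 0.5
```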
TAPping into mental models with blocks
Rough, D.
Quigley, A.
https://hdl.handle.net/10023/13137
2023-04-19T00:42:44Z
2017-10-10T00:00:00Z
Trigger-Action Programming (TAP) has been shown to support end-users' rule-based mental models of context-aware applications. However, when desired behaviours increase in complexity, this can lead to ambiguity that confuses events, states, and how they can be combined in meaningful ways. Blocks programming could provide a solution, through constrained editing of visual triggers, conditions and actions. We observed slips and mistakes by users performing TAP with Jeeves, our domain-specific blocks environment, and propose solutions.
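A tiny illustration of the event/state distinction that the observed slips revolve around: a rule fires on a momentary event but is gated by ongoing state conditions. The rule content is invented and only loosely in the style of Jeeves.

```python
class Rule:
    def __init__(self, trigger, conditions, action):
        self.trigger, self.conditions, self.action = trigger, conditions, action

    def on_event(self, event, state):
        # Trigger is a momentary event; conditions test the current state.
        if event == self.trigger and all(c(state) for c in self.conditions):
            self.action(state)

rule = Rule(
    trigger="arrived_home",
    conditions=[lambda s: s["time_hour"] >= 18],   # a state, not an event
    action=lambda s: print("Prompt: complete evening survey"),
)
rule.on_event("arrived_home", {"time_hour": 19})   # fires
rule.on_event("arrived_home", {"time_hour": 9})    # gated by the condition
```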
Knowledge-based interoperability for mathematical software systems
Kohlhase, Michael
De Feo, Luca
Müller, Dennis
Pfeiffer, Markus Johannes
Rabe, Florian
Thiéry, Nicolas
Vasilyev, Victor
Wiesing, Tom
https://hdl.handle.net/10023/12491
2023-04-19T00:42:12Z
2017-01-01T00:00:00Z
There is a large ecosystem of mathematical software systems. Individually, these are optimized for particular domains and functionalities, and together they cover many needs of practical and theoretical mathematics. However, each system specializes in one area, and it remains very difficult to solve problems that need to involve multiple systems. Some integrations exist, but they are ad hoc and have scalability and maintainability issues. In particular, there is not yet an interoperability layer that combines the various systems into a virtual research environment (VRE) for mathematics. The OpenDreamKit project aims at building a toolkit for such VREs. It suggests using a central system-agnostic formalization of mathematics (Math-in-the-Middle, MitM) as the needed interoperability layer. In this paper, we conduct the first major case study that instantiates the MitM paradigm for a concrete domain as well as a concrete set of systems. Specifically, we integrate GAP, Sage, and Singular to perform computation in group and ring theory. Our work involves massive practical efforts, including a novel formalization of computational group theory, improvements to the involved software systems, and a novel mediating system that sits at the center of a star-shaped integration layout between mathematical software systems.
Funding: OpenDreamKit Horizon 2020 European Research Infrastructures project (#676541) and DFG project RA-18723-1 OAF.
Radar sensing in human-computer interaction
Yeo, Hui-shyong
Quigley, Aaron
https://hdl.handle.net/10023/12478
2023-04-18T23:44:07Z
2018-01-01T00:00:00Z
Plug and Play Bench : simplifying big data benchmarking using containers
Ceesay, Sheriffo
Barker, Adam David
Varghese, Blesson
https://hdl.handle.net/10023/12315
2023-04-19T00:42:21Z
2017-12-11T00:00:00Z
The recent boom of big data, coupled with the challenges of its processing and storage, gave rise to the development of distributed data processing and storage paradigms like MapReduce, Spark, and NoSQL databases. With the advent of cloud computing, processing and storing such massive datasets on clusters of machines is now feasible with ease. However, there are limited tools and approaches which users can rely on to gauge and comprehend the performance of their big data applications deployed locally on clusters, or in the cloud. Researchers have started exploring this area by providing benchmarking suites suitable for big data applications. However, many of these tools are fragmented, complex to deploy and manage, and do not provide transparency with respect to the monetary cost of benchmarking an application. In this paper, we present Plug And Play Bench (PAPB): an infrastructure-aware abstraction built to integrate and simplify the deployment of big data benchmarking tools on clusters of machines. PAPB automates the tedious process of installing, configuring and executing common big data benchmark workloads by containerising the tools and settings based on the underlying cluster deployment framework. Our proof-of-concept implementation utilises HiBench as the benchmark suite, HDP as the cluster deployment framework and Azure as the cloud platform. The paper further illustrates the inclusion of cost metrics based on the underlying Microsoft Azure cloud platform.
This research was supported by a Microsoft Azure Award.
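A hedged sketch of the containerisation idea: launch a benchmark image and derive a monetary cost from wall-clock time and an hourly rate. The image name, workload argument and cost model are placeholders, not PAPB's actual tooling.

```python
import subprocess
import time

def run_benchmark(image, workload, hourly_rate_usd):
    """Run a containerised benchmark and attach a simple cost estimate."""
    start = time.time()
    subprocess.run(["docker", "run", "--rm", image, workload], check=True)
    hours = (time.time() - start) / 3600
    return {"workload": workload, "hours": hours,
            "estimated_cost_usd": hours * hourly_rate_usd}

# Example usage (assumes a suitable image exists locally):
# print(run_benchmark("hibench-runner", "wordcount", hourly_rate_usd=0.45))
```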
Reprogramming embedded systems at run-time
Oliver, Richard
Wilde, Adriana
Zaluska, Ed
https://hdl.handle.net/10023/12279
2022-04-14T20:30:51Z
2014-09-02T00:00:00Z
The dynamic re-programming of embedded systems is a long-standing problem in the field. With the advent of wireless sensor networks and the 'Internet of Things' it has now become necessary to be able to reprogram at run-time due to the difficulty of gaining access to such systems once deployed. The issues of power consumption, flexibility, and operating system protections are examined for a range of approaches, and a critical comparison is given. A combination of approaches is recommended for the implementation of real-world systems and areas where further work is required are highlighted.
Automatic vertebrae localization from CT scans using volumetric descriptors
Karsten, Juan
Arandelovic, Ognjen
https://hdl.handle.net/10023/11889
2023-04-19T00:41:50Z
2017-09-14T00:00:00Z
The localization and identification of vertebrae in spinal CT images plays an important role in many clinical applications, such as spinal disease diagnosis, surgery planning, and post-surgery assessment. However, automatic vertebrae localization presents numerous challenges due to partial visibility, appearance similarity of different vertebrae, varying data quality, and the presence of pathologies. Most existing methods require prior information on which vertebrae are present in a scan, and perform poorly on pathological cases, making them of little practical value. In this paper we describe three novel types of local information descriptors which are used to build more complex contextual features, and train a random forest classifier. The three features are progressively more complex, systematically addressing a greater number of limitations of the current state of the art.
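A generic sketch of the classification stage: a random forest over per-voxel contextual descriptors. The features and labels below are random stand-ins; the paper's three volumetric descriptors are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))      # 500 voxels, 32-D contextual features
y = rng.integers(0, 5, size=500)    # 5 vertebra classes (toy labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:3]))           # predicted vertebra labels
```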
Two variants of the Froidure-Pin algorithm for finite semigroups
Jonusas, Julius
Mitchell, J. D.
Pfeiffer, M.
https://hdl.handle.net/10023/11879
2023-04-18T23:36:18Z
2018-02-08T00:00:00Z
In this paper, we present two algorithms based on the Froidure-Pin Algorithm for computing the structure of a finite semigroup from a generating set. As was the case with the original algorithm of Froidure and Pin, the algorithms presented here produce the left and right Cayley graphs, a confluent terminating rewriting system, and a reduced word of the rewriting system for every element of the semigroup. If U is any semigroup, and A is a subset of U, then we denote by <A> the least subsemigroup of U containing A. If B is any other subset of U, then, roughly speaking, the first algorithm we present describes how to use any information about <A>, that has been found using the Froidure-Pin Algorithm, to compute the semigroup <A∪B>. More precisely, we describe the data structure for a finite semigroup S given by Froidure and Pin, and how to obtain such a data structure for <A∪B> from that for <A>. The second algorithm is a lock-free concurrent version of the Froidure-Pin Algorithm.
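A simplified sketch of the closure step at the heart of Froidure-Pin-style enumeration, for a transformation semigroup: breadth-first products of generators. The real algorithm additionally maintains the Cayley graphs and a confluent rewriting system, which this sketch omits.

```python
from collections import deque

def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))  # apply g, then f

def enumerate_semigroup(generators):
    elements, queue = set(generators), deque(generators)
    while queue:
        x = queue.popleft()
        for g in generators:
            for y in (compose(g, x), compose(x, g)):
                if y not in elements:
                    elements.add(y)
                    queue.append(y)
    return elements

# Standard generators of the full transformation monoid T_3
# (the identity arises as the cube of the 3-cycle):
gens = [(1, 2, 0), (1, 0, 2), (0, 0, 2)]
print(len(enumerate_semigroup(gens)))  # 27 = |T_3|, all maps on 3 points
```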
Verification of a lazy cache coherence protocol against a weak memory model
Banks, Christopher
Elver, Marco
Hoffmann, Ruth
Sarkar, Susmit
Jackson, Paul
Nagarajan, Vijay
https://hdl.handle.net/10023/11870
2023-01-03T11:30:15Z
2017-10-02T00:00:00Z
In this paper, we verify a modern lazy cache coherence protocol, TSO-CC, against the memory consistency model it was designed for, TSO. We achieve this by first showing a weak simulation relation between TSO-CC (with a fixed number of processors) and a novel finite-state operational model which exhibits the laziness of TSO-CC and satisfies TSO. We then extend this by an existing parameterisation technique, allowing verification for an unbounded number of processors. The approach is executed entirely within a model checker; no external tool is required, and very little in-depth knowledge of formal verification methods is required of the verifier.
Funding: EPSRC grant EP/M027317/1
Overcoming mental blocks : a blocks-based approach to Experience Sampling studies
Rough, Daniel John
Quigley, Aaron John
https://hdl.handle.net/10023/11698
2023-04-19T00:42:05Z
2017-12-01T00:00:00Z
Experience Sampling Method (ESM) studies repeatedly survey participants on their behaviours and experiences as they go about their everyday lives. Smartphones afford an ideal platform for ESM study applications as devices seldom leave their users, and can automatically sense surrounding context to augment subjective survey responses. ESM studies are employed in fields such as psychology and social science where researchers are not necessarily programmers and require tools for application creation. Previous tools using web forms, text files, or flowchart paradigms are either insufficient to model the potential complexity of study protocols, or fail to provide a low threshold to entry. We demonstrate that blocks programming simultaneously lowers the barriers to creating simple study protocols, while enabling the creation of increasingly sophisticated protocols. We discuss the design of Jeeves, our blocks-based environment for ESM studies, and explain advantages that blocks afford in ESM study design.
Intuitive and interpretable visual communication of a complex statistical model of disease progression and risk
Li, Jieyi
Arandelovic, Ognjen
https://hdl.handle.net/10023/11489
2023-04-19T00:42:03Z
2017-07-11T00:00:00Z
Computer science and machine learning in particular are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of the state of the art techniques can be a major obstacle to their usability by health care professionals and thus to their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long term prognostics, in an intuitive and easily interpretable manner.
Seastar: a comprehensive framework for telemetry data in HPC environments
Weidner, Ole
Barker, Adam David
Atkinson, Malcolm
https://hdl.handle.net/10023/10908
2023-04-19T00:41:55Z
2017-06-27T00:00:00Z
A large number of 2nd generation high-performance computing applications and services rely on adaptive and dynamic architectures and execution strategies to run efficiently, resiliently, and at scale on today’s HPC infrastructures. They require information about applications and their environment to steer and optimize execution. We define this information as telemetry data. Current HPC platforms do not provide the infrastructure, interfaces and conceptual models to collect, store, analyze, and access such data. Today, applications depend on application and platform specific techniques for collecting telemetry data, introducing significant development overheads that inhibit portability and mobility. The development and adoption of adaptive, context-aware strategies is thereby impaired. To facilitate 2nd generation applications, more efficient application development, and swift adoption of adaptive applications in production, a comprehensive framework for telemetry data management must be provided by future HPC systems and services. We introduce Seastar, a conceptual model and a software framework to collect, store, analyze, and exploit streams of telemetry data generated by HPC systems and their applications. We show how Seastar can be integrated with HPC platform architectures and how it enables common application execution strategies.
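A minimal sketch of the telemetry-stream notion defined above: timestamped (source, metric, value) samples in a queryable store. The class and method names are illustrative assumptions, not Seastar's actual interfaces.

```python
import time
from collections import defaultdict

class TelemetryStore:
    """Append-only store of timestamped telemetry samples, keyed by series."""
    def __init__(self):
        self._series = defaultdict(list)

    def record(self, source, metric, value):
        self._series[(source, metric)].append((time.time(), value))

    def query(self, source, metric):
        return self._series[(source, metric)]

store = TelemetryStore()
store.record("node-17", "cpu_load", 0.82)
store.record("node-17", "cpu_load", 0.91)
print(store.query("node-17", "cpu_load"))
```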
Visualization of patient specific disease risk prediction
Osuala, Richard
Arandelovic, Ognjen
https://hdl.handle.net/10023/10699
2023-04-19T00:41:48Z
2017-02-16T00:00:00Z
The increasing trend of systematic collection of medical data (diagnoses, hospital admission emergencies, blood test results, scans, etc.) by health care providers offers an unprecedented opportunity for the application of modern data mining, pattern recognition, and machine learning algorithms. The ultimate aim is invariably that of improving outcomes, be it directly or indirectly. Notwithstanding the successes of recent research efforts in this realm, a major obstacle, that of making the developed models usable by medical professionals (rather than computer scientists or statisticians), remains largely unaddressed. Yet, a mounting amount of evidence shows that the ability to understand and easily use novel technologies is a major factor governing how widely they are likely to be adopted by the target users (doctors, nurses, and patients, amongst others). In this work we address this technical gap. In particular, we describe a portable, web based interface that allows health care professionals to interact with recently developed machine learning and data driven prognostic algorithms. Our application interfaces a statistical disease progression model and displays its predictions in an intuitive and readily understandable manner. Different types of geometric primitives and their visual properties (such as size or colour) are used to represent abstract quantities such as probability density functions, the rate of change of relative probabilities, and a series of other relevant statistics which the health care professional can use to explore patients' risk factors or provide personalized, evidence and data driven incentivization to the patient.
Light curve analysis from Kepler spacecraft collected data
Nigri, Eduardo
Arandelovic, Ognjen
https://hdl.handle.net/10023/10698
2023-04-19T00:41:48Z
2017-06-06T00:00:00Z
Although scarce, previous work on the application of machine learning and data mining techniques to large corpora of astronomical data has produced promising results. For example, on the task of detecting so-called Kepler objects of interest (KOIs), a range of different ‘off the shelf’ classifiers has demonstrated outstanding performance. These rather preliminary research efforts motivate further exploration of this data domain. In the present work we focus on the analysis of threshold crossing events (TCEs) extracted from photometric data acquired by the Kepler spacecraft. We show that the task of classifying TCEs as being caused by actual planetary transits, as opposed to confounding astrophysical phenomena, is significantly more challenging than that of KOI detection, with different classifiers exhibiting vastly different performances. Nevertheless, the best performing classifier type, the random forest, achieved excellent accuracy, predicting correctly in approximately 96% of cases. Our results and analysis should illuminate further efforts into the development of more sophisticated, automatic techniques, and encourage additional work in the area.
The authors would like to thank CNPq-Brazil and the University of St Andrews for their kind support.
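The 96% figure reflects exactly this kind of ‘off the shelf’ model. A minimal scikit-learn sketch of the setup, using synthetic stand-in features rather than the actual Kepler TCE attributes, might look like this:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # stand-ins for transit depth, duration, period, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = planetary transit, 0 = confounding phenomenon

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))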
TiTAN: exploring midair text entry using freehand input
Yeo, Hui Shyong
Phang, Xiao-Shen
Ha, Taejin
Woo, Woontack
Quigley, Aaron John
https://hdl.handle.net/10023/10661
2022-04-06T11:30:27Z
2017-05-06T00:00:00Z
TiTAN is a spatial user interface that enables freehand, midair text entry with a distant display while only requiring a low-cost depth sensor. Our system aims to leverage one’s familiarity with the QWERTY layout. It allows users to input text, in midair, by mimicking the typing action they typically perform on a physical keyboard or touchscreen. Here, both hands and ten fingers are individually tracked, along with click action detection, which enables a wide variety of interactions. We propose three midair text entry techniques and evaluate the TiTAN system with two different sensors.
Impact of cell load on 5GHz IEEE 802.11 WLAN
Abu-Tair, Mamoun
Bhatti, Saleem Noel
https://hdl.handle.net/10023/10570
2023-04-26T00:24:04Z
2017-03-27T00:00:00Z
We have conducted an empirical study of the latest 5GHz IEEE 802.11 wireless LAN (WLAN) variants, 802.11n (5GHz) and 802.11ac (Wave 1), under different cell load conditions. We have considered typical configurations of both protocols on a Linux testbed. Under light load, there is no clear difference between 802.11n and 802.11ac in terms of performance and energy consumption. However, in some cases of high cell load, we have found that there may be a small advantage with 802.11ac. Overall, we conclude that there may be little benefit in upgrading from 802.11n (5GHz) to 802.11ac in its current offering, as the benefits may be too small.
Information and knowing when to forget it
Sharma, Rohit
Arandelovic, Ognjen
https://hdl.handle.net/10023/10505
2023-04-19T00:41:38Z
2017-05-14T00:00:00Z
In this paper we propose several novel approaches for incorporating forgetting mechanisms into sequential prediction based machine learning algorithms. The broad premise of our work, supported and motivated in part by recent findings stemming from neurology research on the development of human brains, is that knowledge acquisition and forgetting are complementary processes, and that learning can (perhaps unintuitively) benefit from the latter too. We demonstrate that if forgetting is implemented in a purposeful and data driven manner, there are a number of benefits which can be gained from discarding information. The framework we introduce is a general one and can be used with any baseline predictor of choice; hence in this sense it is best described as a meta-algorithm. The method we describe was developed through a series of steps which increase the adaptability of the model while remaining data driven. We first discuss a weakly adaptive forgetting process which we term passive forgetting. A fully adaptive framework, which we term active forgetting, was developed by enveloping a passive forgetting process with a monitoring, self-aware module which detects contextual changes and makes a statistically informed choice as to when the model parameters should be updated abruptly rather than gradually. The effectiveness of the proposed meta-framework was demonstrated on a real world data set concerned with a challenge of major practical importance: that of predicting currency exchange rates. Our approach was shown to be highly effective, reducing prediction errors by nearly 40%.
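One simple reading of passive forgetting, offered as a sketch rather than the paper's exact formulation, is to exponentially down-weight older observations before fitting any baseline predictor; the decay rate and the toy series below are illustrative assumptions.

import numpy as np

def forgetting_weights(n, decay=0.95):
    # Most recent observation gets weight 1; older ones decay geometrically.
    return decay ** np.arange(n - 1, -1, -1)

y = np.array([1.30, 1.31, 1.29, 1.35, 1.40])  # e.g. a short exchange-rate series
w = forgetting_weights(len(y))
print("forgetting-weighted forecast:", np.average(y, weights=w))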
Glycaemic index prediction : a pilot study of data linkage challenges and the application of machine learning
Li, Jingyuan
Arandelovic, Ognjen
https://hdl.handle.net/10023/10504
2023-04-26T00:24:07Z
2017-02-18T00:00:00Z
The glycaemic index (GI) is widely used to characterize the effect that a food has on blood glucose, which is of major importance to diabetic individuals as well as the general population at large. At present, its applicability is severely limited by the labour involved in its measurement and the lack of understanding about how different foods interact to produce the GI of the meal comprising them. In this pilot study we examine whether readily available biochemical properties of foods can be used to predict their GI, thus opening possibilities for practicable use of the GI in the management of blood glucose in everyday life. We also examine practical challenges in the cross-linking of food information sources collected by different organizations, and highlight the need for the development of a universal standard which would facilitate automatic and error free data integration.
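The regression framing can be sketched in a few lines, assuming a small table of per-100g nutrient features and measured GI values; all numbers here are invented for illustration.

import numpy as np
from sklearn.linear_model import Ridge

# columns: carbohydrate, fibre, fat, protein (per 100 g); target: measured GI
X = np.array([[76.0, 2.7, 1.0, 8.0],
              [28.0, 1.8, 0.3, 2.7],
              [17.0, 2.4, 0.2, 0.9],
              [81.0, 9.7, 6.9, 13.2]])
y = np.array([70.0, 64.0, 38.0, 55.0])

model = Ridge(alpha=1.0).fit(X, y)
print("predicted GI:", model.predict([[60.0, 3.0, 2.0, 7.0]]))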
Reading small scalar data fields: color scales vs. Detail on Demand vs. FatFonts
Manteau, Constant
Nacenta, Miguel
Mauderer, Michael
https://hdl.handle.net/10023/10484
2023-04-19T00:41:40Z
2017-05-16T00:00:00Z
We empirically investigate the advantages and disadvantages of color- and digit-based methods to represent small scalar fields. We compare two types of color scales (one brightness-based and one that varies in hue, saturation and brightness) with an interactive tooltip that shows the scalar value on demand, and with a symbolic glyph-based approach (FatFonts). Three experiments tested three tasks: reading values, comparing values, and finding extrema. The results provide the first empirical comparisons of color scales with symbol-based techniques. The interactive tooltip enabled higher accuracy and shorter times than the color scales for reading values but showed slow completion times and low accuracy for value comparison and extrema finding tasks. The FatFonts technique showed better speed and accuracy for reading and value comparison, and high accuracy for the extrema finding task at the cost of being the slowest for this task.
Opportunistic visualization with iVoLVER
Méndez, Gonzalo Gabriel
Nacenta, Miguel A.
https://hdl.handle.net/10023/10161
2023-04-19T00:41:26Z
2016-11-08T00:00:00Z
Proposed as 'data analysis anywhere, anytime, from anything', Opportunistic Information Visualization (OpportuVis) [1] seeks to provide analytical support in scenarios where the data of interest is not explicitly available and has to be retrieved from digital artifacts that are not traditionally used as data sources. Examples include raster images, web pages, vector files, and photographs. This showpiece presents how iVoLVER, the Interactive Visual Language for Visualization Extraction and Reconstruction, provides support in such settings. We briefly describe the overall construction approach of the tool in scenarios where different digital artifacts are used to compose interactive visuals. All of this becomes possible by using the data extraction capabilities of iVoLVER together with the elements of its visual language.
Algorithms for optimising heterogeneous Cloud virtual machine clusters
Thai, Long Thanh
Varghese, Blesson
Barker, Adam David
https://hdl.handle.net/10023/9950
2022-04-13T14:30:19Z
2016-12-12T00:00:00Z
It is challenging to execute an application in a heterogeneous cloud cluster, which consists of multiple types of virtual machines with different performance capabilities and prices. This paper aims to mitigate this challenge by proposing a scheduling mechanism to optimise the execution of Bag-of-Task jobs on a heterogeneous cloud cluster. The proposed scheduler considers two approaches to select suitable cloud resources for executing a user application while satisfying pre-defined Service Level Objectives (SLOs), both in terms of meeting the execution deadline and minimising monetary cost. Additionally, a mechanism for dynamic re-assignment of jobs during execution is presented to resolve potential violations of SLOs. Experimental studies are performed both in simulation and on a public cloud using real-world applications. The results highlight that our scheduling approaches result in cost savings of up to 31% in comparison to naive approaches that employ only a single type of virtual machine in a homogeneous cluster. Dynamic re-assignment completely prevents deadline violations in the best case and reduces them by 95% in the worst case.
This research was supported by an Amazon Web Services Education Research grant.
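The flavour of the SLO-driven selection can be illustrated as a greedy search for the cheapest VM type that still meets the deadline; the actual scheduler also builds heterogeneous mixes and re-assigns jobs at runtime, and the rates and prices below are invented.

# (name, jobs completed per hour, price per hour in $)
vm_types = [("small", 10, 0.05), ("medium", 25, 0.12), ("large", 60, 0.30)]

def cheapest_feasible(n_jobs, deadline_h):
    # Return the cheapest VM type able to finish all jobs within the deadline.
    best = None
    for name, rate, price in vm_types:
        hours = n_jobs / rate
        if hours <= deadline_h:
            cost = hours * price
            if best is None or cost < best[1]:
                best = (name, cost)
    return best

print(cheapest_feasible(n_jobs=120, deadline_h=4))  # -> ('large', 0.6)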
Timing properties and correctness for structured parallel programs on x86-64 multicores
Hammond, Kevin
Brown, Christopher Mark
Sarkar, Susmit
https://hdl.handle.net/10023/9935
2023-01-03T11:30:13Z
2016-01-01T00:00:00Z
This paper determines correctness and timing properties for structured parallel programs on x86-64 multicores. Multicore architectures are increasingly common, but real architectures have unpredictable timing properties, and even correctness is not obvious above the relaxed-memory concurrency models that are enforced by commonly-used hardware. This paper takes a rigorous approach to correctness and timing properties, examining common locking protocols from first principles, and extending this through queues to structured parallel constructs. We prove functional correctness and derive simple timing models, and both extend for the first time from low-level primitives to high-level parallel patterns. Our derived high-level timing models for structured parallel programs allow us to accurately predict upper bounds on program execution times on x86-64 multicores.
Achieving stable subspace clustering by post-processing generic clustering results
Pham, Duc-Son
Arandjelovic, Ognjen
Venkatesh, Svetha
https://hdl.handle.net/10023/9859
2022-05-03T13:30:35Z
2016-10-31T00:00:00Z
We propose an effective subspace selection scheme as a post-processing step to improve results obtained by sparse subspace clustering (SSC). Our method starts with the computation of stable subspaces using a novel random sampling scheme. The preliminary subspaces thus constructed are used to identify the initially incorrectly clustered data points and then to reassign them to more suitable clusters based on their goodness-of-fit to the preliminary model. To improve the robustness of the algorithm, we use a dominant nearest subspace classification scheme that controls the level of sensitivity against reassignment. We demonstrate that our algorithm is convergent and superior to the direct application of a generic alternative such as principal component analysis. On several popular datasets for motion segmentation and face clustering pervasively used in the sparse subspace clustering literature, the proposed method is shown to greatly reduce the incidence of clustering errors while introducing negligible disturbance to the data points already correctly clustered.
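The reassignment step can be sketched as fitting a low-dimensional basis to each preliminary cluster and moving every point to the subspace with the smallest reconstruction residual; the random sampling and dominant nearest subspace schemes are omitted here, so this is a simplified illustration rather than the authors' algorithm.

import numpy as np

def subspace_basis(points, dim=1):
    # Fit a dim-dimensional affine subspace to a cluster via SVD.
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean, full_matrices=False)
    return mean, vt[:dim]

def residual(x, mean, basis):
    # Distance from x to its projection onto the fitted subspace.
    d = x - mean
    return np.linalg.norm(d - basis.T @ (basis @ d))

rng = np.random.default_rng(1)
c0 = rng.normal(size=(50, 1)) @ np.array([[1.0, 0.0, 0.0]])  # cluster along the x axis
c1 = rng.normal(size=(50, 1)) @ np.array([[0.0, 1.0, 0.0]])  # cluster along the y axis
models = [subspace_basis(c0), subspace_basis(c1)]

x = np.array([0.9, 0.1, 0.0])
print("reassigned to cluster", np.argmin([residual(x, m, b) for m, b in models]))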
Towards sophisticated learning from EHRs : increasing prediction specificity and accuracy using clinically meaningful risk criteria
Vasiljeva, Ieva
Arandelovic, Ognjen
https://hdl.handle.net/10023/9856
2023-04-19T00:41:16Z
2016-08-16T00:00:00Z
Computer based analysis of Electronic Health Records (EHRs) has the potential to provide major novel insights of benefit both to specific individuals in the context of personalized medicine, as well as on the level of population-wide health care and policy. The present paper introduces a novel algorithm that uses machine learning for the discovery of longitudinal patterns in the diagnoses of diseases. Two key technical novelties are introduced: one in the form of a novel learning paradigm which enables greater learning specificity, and another in the form of a risk driven identification of confounding diagnoses. We present a series of experiments which demonstrate the effectiveness of the proposed techniques, and which reveal novel insights regarding the most promising future research directions.
Identification of promising research directions using machine learning aided medical literature analysis
Andrei, Victor
Arandjelovic, Ognjen
https://hdl.handle.net/10023/9853
2023-04-19T00:41:17Z
2016-08-16T00:00:00Z
The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aid in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora, and of tracking complex temporal changes within it.
Predicting and optimizing image compression
Murashko, Oleksandr
Thomson, John Donald
Leather, Hugh
https://hdl.handle.net/10023/9668
2023-04-19T00:40:55Z
2016-10-01T00:00:00Z
Image compression is a core task for mobile devices, social media and cloud storage backend services. Key evaluation criteria for compression are: the quality of the output, the compression ratio achieved and the computational time (and energy) expended. Predicting the effectiveness of standard compression implementations like libjpeg and WebP on a novel image is challenging, and often leads to non-optimal compression. This paper presents a machine learning-based technique to accurately model the outcome of image compression for arbitrary new images in terms of quality and compression ratio, without requiring significant additional computational time and energy. Using this model, we can actively adapt the aggressiveness of compression on a per image basis to accurately fit user requirements, leading to a more optimal compression.
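A hedged sketch of the idea: train a regressor mapping cheap image features plus a candidate quality setting to the predicted compression ratio, then search the quality settings for the highest one that still meets a user's target. The features, training data and ratio model below are synthetic stand-ins, not the paper's.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# features: [entropy, edge_density, jpeg_quality] -> observed compression ratio
X = rng.uniform([4.0, 0.0, 20.0], [8.0, 1.0, 95.0], size=(500, 3))
ratio = 800.0 / (X[:, 2] * X[:, 0]) + rng.normal(0.0, 0.2, 500)

model = GradientBoostingRegressor().fit(X, ratio)

def pick_quality(entropy, edges, target_ratio):
    # Highest JPEG quality whose predicted ratio still meets the target.
    qualities = np.arange(20, 96, 5)
    preds = model.predict([[entropy, edges, q] for q in qualities])
    feasible = qualities[preds >= target_ratio]
    return int(feasible.max()) if feasible.size else None

print(pick_quality(entropy=6.0, edges=0.4, target_ratio=5.0))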
WatchMI: applications of watch movement input on unmodified smartwatches
Yeo, Hui Shyong
Lee, Juyoung
Bianchi, Andrea
Quigley, Aaron John
https://hdl.handle.net/10023/9461
2022-04-06T11:30:20Z
2016-09-06T00:00:00Z
In this demo, we show that it is possible to enhance touch interaction on an unmodified smartwatch to support continuous pressure touch, twist and pan gestures, by analyzing only the real-time data of the Inertial Measurement Unit (IMU). Our evaluation results show that the three proposed input interfaces are accurate, noise-resistant, easy to use and can be deployed on a variety of smartwatches. We then showcase the potential of this work with seven example applications. During the demo session, users can try the prototype.
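To illustrate the IMU-only principle, the sketch below flags a twist gesture by integrating gyroscope samples around the watch-face axis; the axis convention, sample period and threshold are assumptions, not the WatchMI implementation.

def detect_twist(gyro_z, dt=0.02, threshold_deg=30.0):
    # Integrate angular velocity (deg/s) and report a twist once past threshold.
    angle = 0.0
    for w in gyro_z:
        angle += w * dt
        if abs(angle) >= threshold_deg:
            return "twist_cw" if angle > 0 else "twist_ccw"
    return None

samples = [120.0] * 15 + [0.0] * 10   # brief clockwise rotation burst
print(detect_twist(samples))          # -> twist_cw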
Client-side energy costs of video streaming
Ejembi, Oche
Bhatti, Saleem N.
https://hdl.handle.net/10023/9353
2023-04-26T00:23:55Z
2016-02-04T00:00:00Z
Through measurements on our testbed, we show how users of Netflix could make energy savings of up to 34% by adjusting video quality settings. We estimate the impacts of these quality settings on the energy consumption of client systems and the network. If users exercise choice in their video streaming habits, over 100 GWh of energy a year could be saved on a global scale. We discuss how providing energy usage information to users of digital video could enable them to make choices of video settings to reduce energy usage, and we estimate savings on associated electricity costs and carbon emissions.
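The structure of such an estimate is simple arithmetic; the power figures below are invented for illustration, but with plausible client power draws the percentage reduction lands near the paper's 34%, and scaling to a global user base is what yields savings of the order of 100 GWh a year.

hd_watts, sd_watts = 35.0, 23.0        # assumed client power draw at two quality settings
hours_per_day, users = 2.0, 1_000_000  # assumed viewing time and user base

daily_saving_kwh = (hd_watts - sd_watts) * hours_per_day * users / 1000.0
print(f"~{daily_saving_kwh * 365 / 1e6:.1f} GWh/year for this user base,"
      f" a {100 * (1 - sd_watts / hd_watts):.0f}% client-side reduction")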
Descriptor transition tables for object retrieval using unconstrained cluttered video acquired using a consumer level handheld mobile device
Rieutort-Louis, Warren
Arandelovic, Ognjen
https://hdl.handle.net/10023/9201
2023-04-19T00:40:33Z
2016-11-03T00:00:00Z
Visual recognition and vision based retrieval of objects from large databases are tasks with a wide spectrum of potential applications. In this paper we propose a novel recognition method from video sequences suitable for retrieval from databases acquired in highly unconstrained conditions e.g. using a mobile consumer-level device such as a phone. On the lowest level, we represent each sequence as a 3D mesh of densely packed local appearance descriptors. While image plane geometry is captured implicitly by a large overlap of neighbouring regions from which the descriptors are extracted, 3D information is extracted by means of a descriptor transition table, learnt from a single sequence for each known gallery object. These allow us to connect local descriptors along the 3rd dimension (which corresponds to viewpoint changes), thus resulting in a set of variable length Markov chains for each video. The matching of two sets of such chains is formulated as a statistical hypothesis test, whereby a subset of each is chosen to maximize the likelihood that the corresponding video sequences show the same object. The effectiveness of the proposed algorithm is empirically evaluated on the Amsterdam Library of Object Images and a new highly challenging video data set acquired using a mobile phone. On both data sets our method is shown to be successful in recognition in the presence of background clutter and large viewpoint changes.
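One plausible reading of the transition-table construction, with descriptors already quantised into vocabulary IDs, is sketched below; the real method operates on dense local appearance descriptors and matches chains with a statistical hypothesis test.

from collections import defaultdict

def transition_table(word_sequence):
    # Count which quantised descriptor follows which along the sequence,
    # then normalise each row into transition probabilities.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(word_sequence, word_sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

seq = [3, 3, 7, 7, 2, 3, 7]   # quantised descriptor IDs across viewpoint changes
print(transition_table(seq))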
Open Badges : a best-practice framework
Voogt, Lennert
Dow, Lisa
Dobson, Simon Andrew
https://hdl.handle.net/10023/9123
2023-04-19T00:40:16Z
2016-07-13T00:00:00Z
The widespread adoption of online education is severely challenged by issues of verifiability, reliability, security and credibility. Open Badges exist to address these challenges, but there is no consensus as to what constitutes best practices regarding the implementation of an Open Badge system within an educational context. In this paper we survey the current landscape of Open Badges from educational and technological perspectives. We analyze a broad set of openly-reported pilot projects and case studies, and derive a comprehensive best practice framework that tries to capture the requirements for successful implementation within educational institutions. We conclude by identifying some significant gaps in the technology and identify some possible future research directions.
All across the circle : using auto-ordering to improve object transfer between mobile devices
Li, Chengzhao
Gutwin, Carl
Stanley, Kevin
Nacenta, Miguel
https://hdl.handle.net/10023/8964
2022-04-21T14:30:50Z
2016-06-01T00:00:00Z
People frequently form small groups in many social and professional situations: from conference attendees meeting at a coffee break, to siblings gathering at a family barbecue. These ad-hoc gatherings typically form into predictable geometries based on circles or circular arcs (called F-Formations). Because our lives are increasingly stored and represented by data on handheld devices, the desire to be able to share digital objects while in these groupings has increased. Using the relative position in these groups to facilitate file sharing can enable intuitive techniques such as passing or flicking. However, there is no reliable, lightweight, ad-hoc technology for detecting and representing relative locations around a circle. In this paper, we present two systems that can auto-order locations about a circle based on sensors that are standard on commodity smartphones. We tested these systems using an object-passing task in a laboratory environment against unordered and proximity-based systems, and show that our techniques are faster, are more accurate, and are preferred by users.
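The core ordering step reduces to sorting devices by angle around the group centroid, as in the sketch below; the 2D positions are placeholders for whatever the commodity-sensor pipeline estimates.

import math

devices = {"alice": (0.0, 1.0), "bob": (1.0, 0.0),
           "carol": (0.0, -1.0), "dave": (-1.0, 0.0)}

# centroid of the group
cx = sum(x for x, _ in devices.values()) / len(devices)
cy = sum(y for _, y in devices.values()) / len(devices)

# sort by bearing from the centroid to get the seating order around the circle
order = sorted(devices, key=lambda d: math.atan2(devices[d][1] - cy,
                                                 devices[d][0] - cx))
print(order)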
Traffic visualization - applying information visualization techniques to enhance traffic planning
Picozzi, Matteo
Verdezoto, Nervo
Pouke, Matti
Vatjus-Anttila, Jarkko
Quigley, Aaron John
https://hdl.handle.net/10023/8828
2023-04-19T00:38:41Z
2013-02-21T00:00:00Z
In this paper, we present a space-time visualization that provides a city’s decision-makers with the ability to analyse and uncover important “city events” in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu, and it can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police officers, city planners) to support our arguments.
Adult dental anxiety : recent assessment approaches and psychological management in a dental practice setting
Humphris, Gerald Michael
Spyt, James
Herbison, Alice
Kelsey, Tom
https://hdl.handle.net/10023/8821
2024-03-06T00:42:38Z
2016-05-01T00:00:00Z
Dental anxiety is a common feature of the everyday experience of dental practice. This article advocates the regular assessment of this psychological construct to assist in patient management. Various tools, such as the Modified Dental Anxiety Scale (MDAS), are available to monitor dental anxiety; they are quick to complete, easy to interpret, and place a low burden on the patient. A new mobile phone assessment system (DENTANX) is being developed for distribution. This application and other psychological interventions are being investigated to assist patients in receiving routine dental care.
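For reference, MDAS scoring as commonly described is trivially computable: five items, each rated from 1 ('not anxious') to 5 ('extremely anxious'), summed to a total between 5 and 25, with 19 or above often taken to indicate high dental anxiety. A sketch, with the cut-off stated as an assumption of common usage rather than a clinical recommendation:

def mdas_total(item_scores):
    # Five items, each scored 1-5; the total therefore ranges from 5 to 25.
    assert len(item_scores) == 5 and all(1 <= s <= 5 for s in item_scores)
    return sum(item_scores)

scores = [3, 4, 2, 5, 4]
total = mdas_total(scores)
print(total, "high anxiety" if total >= 19 else "below cut-off")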
A Linked Data scalability challenge : concept reuse leads to semantic decay
Pareti, Paolo
Klein, Ewan
Barker, Adam David
https://hdl.handle.net/10023/7586
2023-04-19T00:39:53Z
2015-06-28T00:00:00Z
The increasing amount of available Linked Data resources is laying the foundations for more advanced Semantic Web applications. One of their main limitations, however, remains the general low level of data quality. In this paper we focus on a measure of quality which is negatively affected by the increase of the available resources. We propose a measure of semantic richness of Linked Data concepts and we demonstrate our hypothesis that the more a concept is reused, the less semantically rich it becomes. This is a significant scalability issue, as one of the core aspects of Linked Data is the propagation of semantic information on the Web by reusing common terms. We prove our hypothesis with respect to our measure of semantic richness and we validate our model empirically. Finally, we suggest possible future directions to address this scalability problem.
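One way to operationalise such a measure, offered purely as an illustrative guess at the kind of quantity involved rather than the paper's definition, is the entropy of the properties a concept co-occurs with: heavy reuse through a single property then registers as low richness.

import math
from collections import Counter

def richness(property_occurrences):
    # Shannon entropy of the distribution of properties used with a concept.
    counts = Counter(property_occurrences)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# reused in many triples but through few distinct properties -> low richness
print(richness(["rdf:type"] * 95 + ["rdfs:label"] * 5))
# used with many distinct properties -> high richness
print(richness(["p%d" % i for i in range(20)]))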
Evaluating the effects of fluid interface components on tabletop collaboration
Hinrichs, Uta
Carpendale, Sheelagh
Scott, Stacey D.
https://hdl.handle.net/10023/7396
2022-04-07T15:30:05Z
2006-05-23T00:00:00Z
Tabletop displays provide exciting opportunities to support individual and collaborative activities such as planning, organizing, and storyboarding. It has been previously suggested that continuous flow of interface items can ease information access and exploration on a tabletop workspace, yet this concept has not been adequately studied. This paper presents an exploratory user study of Interface Currents, a reconfigurable and mobile tabletop interface component that offers a controllable flow for interface items placed on its surface. Our study shows that Interface Currents supported information access and sharing on a tabletop workspace. The study findings also demonstrate that mobility, flexibility, and general adjustability of Interface Currents are important factors in providing interface support for variations in task and group interactions.
Large displays in urban life : from exhibition halls to media facades
Hinrichs, Uta
Valkanova, Nina
Kuikkaniemi, Kai
Jacucci, Giulio
Carpendale, Sheelagh
Arroyo, Ernesto
https://hdl.handle.net/10023/7387
2022-04-22T09:31:00Z
2011-05-07T00:00:00Z
Recent trends show an increasing prevalence of large interactive displays in public urban life. For example, museums, libraries, public plazas, or architectural facades take advantage of interactive technologies that present information in a highly visual and interactive way. Studies confirm the potential of large interactive display installations for educating, entertaining, and providing evocative experiences. This workshop will provide a platform for researchers and practitioners from different disciplines to exchange insights on current research questions in the area. The workshop will focus on how to design large interactive display installations that promote engaging experiences that go beyond playful interaction, and how to evaluate their impact. The goal is to cross-fertilize insights from different disciplines, establish a more general understanding of large interactive displays in public urban contexts, and develop an agenda for future research directions in this area.
Perceptual and social challenges in body proximate display ecosystems
Quigley, Aaron John
Grubert, Jens
https://hdl.handle.net/10023/7315
2023-04-19T00:39:43Z
2015-08-24T00:00:00Z
Coordinated multi-display environments, from the desktop and second screen to gigapixel display walls, are increasingly common. Personal and intimate display devices such as head-mounted displays, smartwatches, smartphones and tablets are rarely part of such a multi-display ecosystem. This presents an opportunity to realise “body proximate” display environments, employing displays on and around the body. These can be formed by combining multiple handheld, head-mounted, wrist-worn or other personal or appropriated displays. However, such an ecosystem, encapsulating ever more interaction points, is not yet well understood. For example, does it trap the user in an “interaction bubble” even more than interaction with individual displays such as smartphones? Within this paper, we investigate the perceptual and social challenges that could inhibit the adoption and acceptance of interactive proximate display ecosystems. We conclude with a series of research questions raised in the consideration of such environments.
Design and technology challenges for body proximate display ecosystems
Grubert, Jens
Kranz, Matthias
Quigley, Aaron John
https://hdl.handle.net/10023/7314
2023-04-19T00:39:42Z
2015-08-24T00:00:00Z
Body proximate display environments can be formed by combining multiple hand-held, head-mounted, wrist-worn or other displays. Wearable displays such as smartwatches and smartglasses have the potential to extend the interaction capabilities of mobile users beyond a single display. However, the display ecosystem formed by multiple personal displays on and around users’ bodies is not yet well understood. Within this paper, we investigate the design and technology challenges that could inhibit the creation and use of interactive display ecosystems.
Some challenges for ethics in social network research
Hutton, Luke
Henderson, Tristan
https://hdl.handle.net/10023/7291
2023-04-19T00:39:31Z
2015-08-21T00:00:00Z
Social network sites (SNSes) comprise one of the most popular networked applications of late, with hundreds of millions of users. Collecting and analysing data from such systems creates myriad ethical issues and challenges for researchers both in networked systems and other fields, as highlighted by recent media sensitivity about research studies that have used data from Facebook. In our workshop contribution we discuss recent work that we have been carrying out in the area of responsible SNS research, revolving around themes of reproducibility, consent, incentives, and creating ethical workflows.
This work was supported by the Engineering and Physical Sciences Research Council [grant numbers EP/J500549/1, EP/M506631/1].
Executing Bag of Distributed Tasks on virtually unlimited Cloud resources
Thai, Long Thanh
Varghese, Blesson
Barker, Adam David
https://hdl.handle.net/10023/7137
2023-04-19T00:39:30Z
2015-05-20T00:00:00Z
A Bag-of-Distributed-Tasks (BoDT) application is a collection of identical and independent tasks, each of which requires a piece of input data located somewhere around the world. Cloud computing offers an effective way to execute BoDT applications, as it not only consists of multiple geographically distributed data centres but also allows a user to pay only for what is actually used. In this paper, executing BoDT applications on virtually unlimited cloud resources is investigated. To this end, a heuristic algorithm is proposed to find an execution plan that takes budget constraints into account. Compared with other approaches, for the same given budget, the proposed algorithm is able to reduce the overall execution time by up to 50%.
This research is supported by the EPSRC grant ‘Working Together: Constraint Programming and Cloud Computing’ (EP/K015745/1), a Royal Society Industry Fellowship, an Impact Acceleration Account Grant (IAA) and an Amazon Web Services (AWS) Education Research Grant.
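To make the flavour of such a plan concrete, here is a minimal sketch of a budget-constrained placement heuristic; it is an illustration only, not the paper's algorithm, and the Option structure, regions, costs and greedy rule are all invented for the example.

    # Illustrative greedy heuristic: give each task the fastest placement
    # that still fits the remaining budget, else the cheapest one.
    from dataclasses import dataclass

    @dataclass
    class Option:
        region: str      # data centre near the task's input data
        cost: float      # estimated cost of running the task there
        time: float      # estimated transfer + compute time

    def make_plan(tasks, budget):
        """tasks: one list of Options per task; returns (choices, spend)."""
        choices, spent = [], 0.0
        for options in tasks:
            affordable = [o for o in options if spent + o.cost <= budget]
            pick = (min(affordable, key=lambda o: o.time) if affordable
                    else min(options, key=lambda o: o.cost))
            choices.append(pick)
            spent += pick.cost
        return choices, spent

    tasks = [[Option("eu-west", 0.12, 30), Option("us-east", 0.08, 55)],
             [Option("us-east", 0.10, 20), Option("ap-south", 0.05, 60)]]
    chosen, total = make_plan(tasks, budget=0.20)
    print([c.region for c in chosen], total)   # ['eu-west', 'ap-south'] 0.17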
From missions to systems : generating transparently distributable programs for sensor-oriented systems
Porter, Barry
Dearle, Alan
Dobson, Simon Andrew
https://hdl.handle.net/10023/6883
2023-04-19T00:39:37Z
2012-12-04T00:00:00Z
Early Wireless Sensor Networks aimed simply to collect as much data as possible for as long as possible. While this remains true in selected cases, the majority of future sensor network applications will demand much more intelligent use of their resources as networks increase in scale and support multiple applications and users. Specifically, we argue that a computational model is needed in which the ways that data flows through networks, and the ways in which decisions are made based on that data, are transparently distributable and relocatable as requirements evolve. In this paper we present an approach to achieving this using high-level mission specifications from which we can automatically derive transparently distributable programs.
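As a loose illustration of the idea (the specification format and the compilation rule below are invented, not the paper's notation), a declarative mission might be compiled into per-node programs whose placement can later change without touching the spec:

    # Toy rendering of the idea: a declarative mission is compiled into node
    # roles, so the same spec can be re-partitioned as requirements change.
    mission = {
        "sense":     {"metric": "temperature", "period_s": 60},
        "aggregate": {"op": "mean", "window": 10},
        "report":    {"sink": "gateway"},
    }

    def compile_mission(mission, nodes):
        """Give every node the sensing stage; place the rest on one head
        node, so aggregation can later be relocated without spec changes."""
        programs = {n: [("sense", mission["sense"])] for n in nodes}
        head = nodes[0]   # relocation = choosing a different head node
        programs[head] += [("aggregate", mission["aggregate"]),
                           ("report", mission["report"])]
        return programs

    for node, prog in compile_mission(mission, ["n1", "n2", "n3"]).items():
        print(node, [stage for stage, _ in prog])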
Children’s Creativity Lab : creating a ‘pen of the future’
Mann, Anne-Marie
Hinrichs, Uta
Quigley, Aaron John
https://hdl.handle.net/10023/6880
2023-04-19T00:39:36Z
2014-11-11T00:00:00Z
Technology is changing the way we acquire new skills and proficiencies, and handwriting is no exception to this. However, while some technological advancements exist in this area, the question of how we can digitally enhance the process of learning handwriting remains under-explored. Being immersed in this process on an everyday basis, we believe that school-aged children can provide valuable ideas and insights into the design of future writing tools for learners developing their (hand)writing skills. As end-users of the proposed technology, we explore including children in a form of informed participatory design during a creativity lab where we invited 12 children, aged 11–12, to put themselves into the shoes of product designers and create a Pen of the Future using prototyping materials. In this paper we describe our methodology and discuss the design ideas that the children came up with and how these may inform the design of future writing tools.
This work is funded by EPSRC and SICSA.
Self managing monitoring for highly elastic large scale Cloud deployments
Ward, Jonathan Stuart
Barker, Adam David
https://hdl.handle.net/10023/6806
2023-04-19T00:39:13Z
2014-06-23T00:00:00Z
Infrastructure as a Service computing exhibits a number of properties which are not found in conventional server deployments. Elasticity is among the most significant of these properties, and it has wide-reaching implications for applications deployed in cloud-hosted VMs. Among the applications affected by elasticity is monitoring. In this paper we investigate the challenges of monitoring large cloud deployments and how these challenges differ from previous monitoring problems. In order to meet these unique challenges we propose Varanus, a highly scalable monitoring tool resistant to the effects of rapid elasticity. This tool breaks with many of the conventions of previous monitoring systems and leverages a multi-tier P2P architecture in order to achieve in situ monitoring without the need for dedicated monitoring infrastructure. We then evaluate Varanus against current monitoring architectures. We find that conventional monitoring tools perform acceptably for small, unchanging cloud deployments. However, in the case of large or highly elastic deployments, current tools perform unacceptably, incurring increased latencies, high load and slowed operation, necessitating that a new, alternative tool be used. Further, we demonstrate that Varanus maintains low-latency, low-resource propagation of monitoring state at scale and during periods of high elasticity.
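Varanus itself is not shown here, but the following toy sketch illustrates the general gossip-style propagation that lets monitoring state spread peer-to-peer without dedicated infrastructure; the fan-out, state layout and freshness rule are assumptions made for the example.

    # Toy gossip round: each node pushes its monitoring state to a few
    # random peers, so state spreads without a central monitoring server.
    import random

    def gossip_round(nodes, fanout=2):
        for node in nodes:
            peers = random.sample([n for n in nodes if n is not node],
                                  min(fanout, len(nodes) - 1))
            for peer in peers:
                # Keep the freshest reading for every metric key.
                for key, (ts, value) in node["state"].items():
                    if key not in peer["state"] or peer["state"][key][0] < ts:
                        peer["state"][key] = (ts, value)

    nodes = [{"state": {f"vm{i}.load": (i, 0.1 * i)}} for i in range(4)]
    for _ in range(3):                  # a few rounds suffice at this scale
        gossip_round(nodes)
    print(len(nodes[0]["state"]))       # almost certainly 4 after 3 rounds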
BigExcel : a web-based framework for exploring big data in Social Sciences
Saleem, Muhammed Asif
Varghese, Blesson
Barker, Adam
https://hdl.handle.net/10023/6805
2023-07-07T09:30:06Z
2015-01-07T00:00:00Z
This paper argues that there are three fundamental challenges that need to be overcome in order to foster the adoption of big data technologies in disciplines outside computer science: addressing issues of accessibility of such technologies for non-computer scientists, supporting the ad hoc exploration of large data sets with minimal effort, and the availability of lightweight web-based frameworks for quick and easy analytics. In this paper, we address the above three challenges through the development of 'BigExcel', a three-tier web-based framework for exploring big data that facilitates the management of user interactions with large data sets, the construction of queries to explore the data set and the management of the infrastructure. The feasibility of BigExcel is demonstrated through two Yahoo Sandbox datasets. The first is the Yahoo Buzz Score dataset, which we use to quantitatively predict trending technologies; the second is the Yahoo n-gram corpus, which we use to qualitatively infer the coverage of important events. A demonstration of the BigExcel framework and source code is available at http://bigdata.cs.st-andrews.ac.uk/projects/bigexcel-exploring-big-data-for-social-sciences/.
This research was pursued through an Amazon Web Services Education Research Grant. The first author was the recipient of an Erasmus Mundus scholarship.
Making social media research reproducible
Hutton, Luke
Henderson, Tristan
https://hdl.handle.net/10023/6692
2023-04-19T00:39:22Z
2015-05-26T00:00:00Z
The huge numbers of people using social media make online social networks an attractive source of data for researchers. But in order for the resultant huge numbers of research publications that involve social media to be credible and trusted, their methodologies, considerations of data handling and sensitivity, analysis, and so forth must be appropriately documented. We believe that one way to improve standards and practices in social media research is to encourage such research to be made reproducible, that is, to have sufficient documentation and sharing of research to allow others to either replicate or build on research results. Enabling this fundamental part of the scientific method will benefit the entire social media ecosystem, from the researchers who use data to the people that benefit from the outcomes of research.
Fault detection for binary sensors in smart home environments
Ye, Juan
Stevenson, Graeme
Dobson, Simon
https://hdl.handle.net/10023/6588
2023-04-19T00:39:15Z
2015-03-23T00:00:00Z
Experiments in assisted living confirm that such systems can provide context-aware services that enable occupants to remain active and independent. They also demonstrate that abnormal sensor events hamper the correct identification of critical (and potentially life-threatening) situations, and that existing learning, estimation, and time-based approaches are inaccurate and inflexible when applied to multiple people sharing a living space. We propose a technique that integrates the semantics of sensor readings with statistical outlier detection. We evaluate the technique against four real-world datasets that include multiple individuals, and show consistent rates of anomaly detection across different environments.
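The technique itself is not reproduced here, but a toy hybrid check conveys the shape of the idea: a semantic rule rejects states that are impossible given what the sensors mean, while a simple statistical test flags sensors whose firing rate is an outlier. The rules, thresholds and data below are illustrative assumptions.

    # Toy hybrid check: semantics catch impossible sensor states, a z-score
    # over daily event counts catches sensors that suddenly misfire.
    import statistics

    MUTUALLY_EXCLUSIVE = {("front_door_open", "front_door_closed")}

    def semantic_faults(active):
        """active: set of sensor names currently reporting True."""
        return [pair for pair in MUTUALLY_EXCLUSIVE
                if pair[0] in active and pair[1] in active]

    def statistical_faults(daily_counts, z=3.0):
        """daily_counts: {sensor: [events per day]}; compares today's
        count against the preceding days' baseline."""
        faulty = []
        for sensor, counts in daily_counts.items():
            baseline, today = counts[:-1], counts[-1]
            mu, sd = statistics.mean(baseline), statistics.pstdev(baseline)
            if sd and abs(today - mu) / sd > z:
                faulty.append(sensor)
        return faulty

    print(semantic_faults({"front_door_open", "front_door_closed"}))
    print(statistical_faults({"kitchen_pir": [40, 38, 41, 39, 400]}))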
MOOCs with attitudes : Insights from a practitioner based investigation
Chadaj, Monika
Baxter, Gordon
Allison, Colin
https://hdl.handle.net/10023/6494
2023-04-19T00:39:23Z
2014-10-01T00:00:00Z
In the current educational landscape of shrinking public budgets and increasing costs, MOOCs have become one of the most dominant discourses in higher education (HE). However, due to their short history, they are only just beginning to be systematically investigated. In an attempt to shed more light on the MOOC phenomenon, this study complements other approaches by eliciting institutional attitudes to MOOC provision using qualitative content analysis on responses captured in a series of semi-structured interviews with participants who hold senior positions in universities and who are involved in creating institutional policy and/or the design and delivery of MOOCs. A context for these interviews was created by looking at MOOCs from historical, pedagogical, monetary and technological perspectives. Five topics emerged that were subsequently used as common points of reference for comparisons across the interviews: motivation, monetization, pedagogy, traditional universities and public access to higher education. The analysis of attitudes to, and the importance of, these topics is summarized and illustrated through quotes from the participants. Interestingly, insiders do not appear to regard MOOCs as being as disruptive as the media presents them; rather, they see MOOCs primarily as marketing vehicles for global education brands.
Augmented learning roads for internet routing
McCaffery, John Philip
Miller, Alan Henry David
Oliver, Iain Angus
Allison, Colin
https://hdl.handle.net/10023/6493
2023-01-22T12:30:24Z
2014-10-01T00:00:00Z
As the Internet continues to establish itself as a utility, like power, transport or water, it becomes increasingly important to provide an engaging educational experience about its operation for students in related STEM disciplines such as Computer Science and Electrical Engineering. Routing is a core functionality of the global Internet. It can be used as an example of where theory meets practice, where algorithms meet protocols and where science meets engineering. Routing protocols can be included in the Computer Science curriculum in distributed systems, computer networking, algorithms, data structures, and graph theory. While there is a plethora of computer networking textbooks, and copious information of varying quality about the Internet spread across the Web, there is still an essential need for exploratory learning facilities of the type that support group work, experimentation and experiential learning. This paper reports on work using open virtual worlds to provide a multi-user interactive learning environment for Internet routing which exemplifies the capabilities of emerging immersive education technologies to augment conventional practice. The functionality of the learning environment is illustrated through examples and the underlying system which was built to support the routing simulations is explained.
Using online social media platforms for ubiquitous, personal health monitoring
Khorakhun, C.
Bhatti, S. N.
https://hdl.handle.net/10023/6403
2023-04-26T00:23:37Z
2014-10-15T00:00:00Z
We propose the use of an open and publicly accessible online social media platform (OSMP) as a key component for ubiquitous and personal remote health monitoring. Remote monitoring is an essential part of future mHealth systems for the delivery of personal healthcare, allowing the collection of personal bio-data outside clinical environments. Previous mHealth projects focused on building private and custom platforms using closed architectures, which have a high implementation cost, take a long time to develop, and may provide limited access and usability. By exploiting existing and publicly accessible infrastructure through an OSMP, initial costs can be reduced while allowing fast and flexible application development at scale, and users are presented with interfaces and interactions that they are already familiar with. We survey and discuss the suitability of OSMPs in terms of functionality and performance, and the key challenge of ensuring appropriate levels of security and privacy.
Date of Acceptance: 29/08/2014
Wellbeing as a proxy for a mHealth study
Khorakhun, C.
Bhatti, S. N.
https://hdl.handle.net/10023/6399
2023-04-19T00:39:07Z
2014-11-02T00:00:00Z
The quantified-self is a key enabler for mHealth. We propose that a wellbeing remote monitoring scenario can act as a suitable proxy for mHealth monitoring by the use of an online social network (OSN). We justify our position by discussing the parallelism between purpose-driven wellbeing and mHealth scenarios, and the similarity between the two in terms of privacy and data sharing. By using such a proxy, some of the legal and ethical complexity can be removed from experimentation on new technologies and systems for mHealth. This enables technology researchers to investigate and test new technologies, system interactions, and security and privacy in healthcare in pre-clinical experiments, without loss of context. The analogy between two purpose-driven scenarios, i.e. fitness monitoring in the wellbeing scenario and remote monitoring in mHealth, is discussed in terms of a practical example: we present a prototype using a wellbeing device -- Fitbit -- and an open source online social media platform (OSMP) -- Diaspora.
Date of Acceptance: 21/09/2014
Mapping parallel programs to heterogeneous CPU/GPU architectures using a Monte Carlo Tree Search
Goli, Mehdi
McCall, John
Brown, Christopher Mark
Janjic, Vladimir
Hammond, Kevin
https://hdl.handle.net/10023/6157
2022-04-13T14:30:10Z
2013-06-20T00:00:00Z
The single-core processor, which dominated for over 30 years, is now obsolete, with recent trends moving increasingly towards parallel systems and demanding a huge shift in programming techniques and practices. Moreover, we are rapidly moving towards an age where almost all programming will target parallel systems. Parallel hardware is rapidly evolving, with large heterogeneous systems, typically comprising a mixture of CPUs and GPUs, becoming the mainstream. With this increasing heterogeneity comes increasing complexity: not only does the programmer have to worry about where and how to express the parallelism, they must also express an efficient mapping of resources to the available system. This generally requires in-depth expert knowledge that most application programmers do not have. In this paper we describe a new technique that automatically derives optimal mappings for an application onto a heterogeneous architecture, using a Monte Carlo Tree Search algorithm. Our technique exploits high-level design patterns, targeting a set of well-specified parallel skeletons. We demonstrate, on a convolution example, that our MCTS technique obtains speedups within 5% of those achieved by a hand-tuned version of the same application.
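As a self-contained illustration of the search component only (the cost model, stage set and parameters are invented, and the paper's skeleton-based technique is not reproduced), a toy MCTS can pick a CPU/GPU assignment per pipeline stage:

    # Toy MCTS over device mappings: each stage goes to "cpu" or "gpu",
    # and a made-up cost table scores complete mappings; a real system
    # would use measured or modelled costs instead.
    import math, random

    STAGE_COST = {"cpu": [4.0, 9.0, 2.0], "gpu": [6.0, 3.0, 5.0]}
    N_STAGES = 3

    def cost(mapping):                  # toy model: sum of per-stage costs
        return sum(STAGE_COST[d][i] for i, d in enumerate(mapping))

    class Node:
        def __init__(self, mapping=()):
            self.mapping, self.children = mapping, {}
            self.visits, self.value = 0, 0.0

    def select(root):                   # UCB1 descent through expanded tree
        node, path = root, [root]
        while len(node.mapping) < N_STAGES and len(node.children) == 2:
            node = max(node.children.values(),
                       key=lambda c: c.value / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
            path.append(node)
        return path

    def mcts(iters=200):
        root = Node()
        for _ in range(iters):
            path = select(root)
            leaf = path[-1]
            if len(leaf.mapping) < N_STAGES:        # expand one new child
                d = random.choice([d for d in ("cpu", "gpu")
                                   if d not in leaf.children])
                leaf.children[d] = leaf = Node(leaf.mapping + (d,))
                path.append(leaf)
            m = list(leaf.mapping)                  # random rollout
            while len(m) < N_STAGES:
                m.append(random.choice(("cpu", "gpu")))
            reward = -cost(m)
            for n in path:                          # backpropagate
                n.visits += 1
                n.value += reward
        node, best = root, []                       # read off best mapping
        while node.children:
            d, node = max(node.children.items(), key=lambda kv: kv[1].visits)
            best.append(d)
        return best

    print(mcts())    # ['cpu', 'gpu', 'cpu'] is optimal for this cost table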
Workflow partitioning and deployment on the cloud using Orchestra
Jaradat, Ward
Dearle, Alan
Barker, Adam
https://hdl.handle.net/10023/6104
2023-04-19T00:39:12Z
2015-02-15T00:00:00Z
Orchestrating service-oriented workflows is typically based on a design model that routes both data and control through a single point -- the centralised workflow engine. This causes scalability problems that include the unnecessary consumption of network bandwidth, high latency in transmitting data between the services, and performance bottlenecks. These problems are especially prominent when orchestrating workflows that are composed from services dispersed across distant geographical locations. This paper presents a novel workflow partitioning approach, which attempts to improve the scalability of orchestrating large-scale workflows. It permits the workflow computation to be moved towards the services providing the data in order to garner optimal performance results. This is achieved by decomposing the workflow into smaller sub-workflows for parallel execution, and determining the most appropriate network locations to which these sub-workflows are transmitted and subsequently executed. This paper demonstrates the efficiency of our approach using a set of experimental workflows that are orchestrated over Amazon EC2 and across several geographic network regions.
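A minimal sketch of the partitioning step follows, assuming a toy workflow representation of (task, service region, dependencies); Orchestra's actual decomposition and placement logic is richer than this grouping-by-region rule.

    # Illustrative partitioning: group tasks by the region of the service
    # each one invokes, so each sub-workflow runs next to its data and
    # only cross-region edges incur inter-engine transfers.
    from collections import defaultdict

    workflow = [                        # (task, service_region, depends_on)
        ("fetch_a", "eu-west", []),
        ("fetch_b", "us-east", []),
        ("clean_a", "eu-west", ["fetch_a"]),
        ("clean_b", "us-east", ["fetch_b"]),
        ("combine", "eu-west", ["clean_a", "clean_b"]),
    ]

    def partition(workflow):
        subs = defaultdict(list)
        for task, region, _ in workflow:
            subs[region].append(task)
        region_of = {t: r for t, r, _ in workflow}
        cross = [(d, task) for task, _, deps in workflow for d in deps
                 if region_of[d] != region_of[task]]
        return dict(subs), cross

    subs, cross = partition(workflow)
    print(subs)     # {'eu-west': [...], 'us-east': [...]}
    print(cross)    # [('clean_b', 'combine')] -- the one cross-region edge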
RBioCloud : a light-weight framework for bioconductor and R-based jobs on the Cloud
Varghese, Blesson
Patel, Ishan
Barker, Adam David
https://hdl.handle.net/10023/6061
2023-04-18T09:56:54Z
2014-10-02T00:00:00Z
Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs have been benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticians can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source, light-weight framework for executing R scripts using Bioconductor packages, referred to as ‘RBioCloud’, is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.
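To suggest the shape of such tooling, here is a hypothetical command-line skeleton; the subcommand names and flags are invented for illustration and are not RBioCloud's actual interface.

    # Hypothetical CLI skeleton for launching, submitting and tearing down
    # cloud-hosted R jobs; a real tool would call provider APIs here.
    import argparse

    def main(argv=None):
        parser = argparse.ArgumentParser(prog="rcloud")
        sub = parser.add_subparsers(dest="command", required=True)

        launch = sub.add_parser("launch", help="provision worker instances")
        launch.add_argument("--instances", type=int, default=1)

        submit = sub.add_parser("submit", help="run an R script on workers")
        submit.add_argument("script")                 # e.g. analysis.R
        submit.add_argument("--data", help="input data to stage to workers")

        sub.add_parser("teardown", help="release all cloud resources")

        args = parser.parse_args(argv)
        print(f"would run: {args.command} with {vars(args)}")  # stub only

    if __name__ == "__main__":
        main(["submit", "analysis.R", "--data", "reads.fastq"])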
Data citation practices in the CRAWDAD wireless network data archive
Henderson, Tristan
Kotz, David
https://hdl.handle.net/10023/6038
2022-04-20T14:30:22Z
2015-01-01T00:00:00Z
CRAWDAD (Community Resource for Archiving Wireless Data At Dartmouth) is a popular research data archive for wireless network data, archiving over 100 datasets used by over 6,500 users. In this paper we examine citation behaviour amongst 1,281 papers that use CRAWDAD datasets. We find that (in general) paper authors cite datasets in a manner that is sufficient for providing credit to dataset authors and also provides access to the datasets that were used. Only 11.5% of papers did not do so; common problems included (1) citing the canonical papers rather than the dataset, (2) describing the dataset using unclear identifiers, and (3) not providing URLs or pointers to datasets.
We are thankful for the generous support of our current funders ACM SIGCOMM and ACM SIGMOBILE, and our past funders Aruba Networks, Intel and the National Science Foundation.
Designing the Unexpected : Endlessly Fascinating Interaction for Interactive Installations
MacDonald, Lindsay
Brosz, John
Nacenta, Miguel
Carpendale, Sheelagh
https://hdl.handle.net/10023/6001
2022-04-22T09:30:58Z
2015-01-15T00:00:00Z
We present A Delicate Agreement, an interactive art installation designed to intrigue viewers by offering them an unfolding story that is endlessly fascinating. To achieve this, we set our story in the liminal space of an elevator, and populated this elevator with a set of unique characters. Viewers watch the story unfold through peepholes in the elevator’s doors, where in turn their gaze can trigger changes in the storyline. This storyline’s interactive response was created via a complex adaptive system using simple rules based on Goffman’s performance theory.
This research was supported in part by SSHRC, NSERC, SMART Technologies, AITF, SurfNet and GRAND.
Repeating history : execution replay for Parallel Haskell programs
Ferrerio, Henrique
Janjic, Vladimir
Castro, Laura
Hammond, Kevin
https://hdl.handle.net/10023/5895
2023-04-19T00:38:26Z
2013-01-01T00:00:00Z
Parallel profiling tools, such as ThreadScope for Parallel Haskell, allow programmers to obtain information about the performance of their parallel programs. However, the information they provide is not always sufficiently detailed to precisely pinpoint the cause of some performance problems. Often, this is because the cost of obtaining that information would be prohibitive for a complete program execution. In this paper, we adapt the well-known technique of execution replay to make it possible to simulate a previous run of a program. We ensure that the non-deterministic parallel behaviour of the application is properly emulated while the deterministic functional code is run unmodified. In this way, we can gather additional data about the behaviour of a parallel program by replaying some parts of it with more detailed profiling information. We exploit this ability to identify performance bottlenecks in a quicksort implementation, and to derive a version that gives better speedups on multicore machines.
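A minimal record/replay sketch, in Python rather than Haskell, conveys the core trick: log the scheduler's nondeterministic choices on a live run, then feed them back verbatim so the deterministic task bodies re-execute identically under heavier profiling. The scheduler and task structure here are invented for the example.

    # Record/replay of a nondeterministic scheduler: the random picks are
    # logged once and replayed verbatim, so task results are reproduced.
    import random

    def run(tasks, schedule_log=None, record=None):
        pending, results = list(tasks), []
        while pending:
            if schedule_log is not None:       # replay mode: reuse choices
                i = schedule_log.pop(0)
            else:                              # live, nondeterministic run
                i = random.randrange(len(pending))
                record.append(i)
            name, fn = pending.pop(i)
            results.append((name, fn()))       # deterministic task body
        return results

    tasks = [("a", lambda: 1 + 1), ("b", lambda: 2 * 3), ("c", lambda: 7)]
    log = []
    first = run(tasks, record=log)
    replayed = run(tasks, schedule_log=list(log))
    assert first == replayed                   # identical order and values
    print(first)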
The Haptic Touch toolkit : enabling exploration of haptic interactions
Ledo, David
Nacenta, Miguel
Marquardt, Nicolai
Boring, Sebastian
Greenberg, Saul
https://hdl.handle.net/10023/5555
2022-04-22T09:30:56Z
2012-02-19T00:00:00Z
In the real world, touch based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the Haptictouch toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time.
This work is partially funded by the AITF/NSERC/SMART Chair in Interactive Technologies, Alberta Innovates Tech. Futures, NSERC, and SMART Technologies.
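An illustrative take on the middle, behaviour layer follows; the function signatures and the summing combinator are assumptions made for the sketch, not the toolkit's actual API. Behaviours are small composable functions whose combined output a device layer would turn into actuator commands.

    # Composable haptic behaviours: each maps (pressure, time) to an output
    # level; combine() sums them and clamps to the actuator's range.
    import math

    def softness(k=0.5):
        return lambda pressure, t: k * pressure      # yields under pressure

    def oscillation(freq=5.0, amp=0.2):
        return lambda pressure, t: amp * math.sin(2 * math.pi * freq * t)

    def combine(*behaviours):
        return lambda pressure, t: min(1.0, max(0.0, sum(
            b(pressure, t) for b in behaviours)))

    soft_buzz = combine(softness(0.6), oscillation())
    for t in (0.0, 0.05, 0.1):
        print(round(soft_buzz(0.8, t), 3))   # what the device layer sees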
Self-management of self-organising mobile computing applications : a separation of concerns approach
Fernandez Marquez, Jose Luis
di Marzo Serugendo, Giovanna
Stevenson, Graeme Turnbull
Ye, Juan
Dobson, Simon Andrew
Zamonelli, Franco
https://hdl.handle.net/10023/5538
2024-03-03T00:38:22Z
2014-03-24T00:00:00Z
Although the research area of self-organising systems is well established, their construction is often ad hoc. Consequently, such software is difficult to reuse across applications that require similar functionality or have similar goals. The development of self-organising applications and, a fortiori, self-organising mobile applications is therefore limited to developers who are experts in specific self-organising mechanisms. As a first step towards addressing this, this paper discusses the notion of self-organising mechanisms provided as services for building higher level functionality in a modular way. This eases reuse and thus provides separation of concerns. Additionally, because of the dynamic and heterogeneous nature of mobile networks, services need to adapt themselves in order to ensure both functional and non-functional requirements. This paper discusses whether the self-management of self-organising mobile applications can be achieved in a modular fashion, via the self-management of the low-level self-organising services it employs, rather than considering the management of the complex system as a whole. We empirically investigate two non-functional aspects: resource optimisation and accuracy.
The cost of virtue : reward as well as feedback are required to reduce user ICT power consumption
Yu, Yi
Bhatti, Saleem N.
https://hdl.handle.net/10023/5378
2023-04-26T00:23:31Z
2014-06-11T00:00:00Z
We show that students in a school lab environment will change their behaviour to be more energy efficient when appropriate incentives are in place and when measurement-based, real-time feedback about their energy usage is provided. Rewards incentivise `non-green' users to be `green' as well as encouraging those users who already claim to be `green'. Measurement-based feedback improves user energy awareness and helps users to explore and adjust their use of computers to become `greener', but is not sufficient by itself. In our measurements, weekly mean group energy use as a whole reduced by up to 16%, and weekly individual user energy consumption reduced by up to 56% during active use. The findings are drawn from our longitudinal study, which involved 83 Computer Science students, lasted 48 weeks across two academic years, monitored a total of 26,778 hours of active computer use, and collected approximately 2 TB of raw data.
This work was partly supported by the IU-AC project, funded by grant EP/J016756/1 from the Engineering and Physical Sciences Research Council (EPSRC).
An elastic virtual infrastructure for research applications (ELVIRA)
Voss, Alexander
Barker, Adam David
Asgari-Targhi, Mahboubeh
van Ballegooijen, Adriaan
Sommerville, Ian
https://hdl.handle.net/10023/4870
2023-04-18T09:52:05Z
2013-11-25T00:00:00Z
Cloud computing infrastructures provide a way for researchers to source the computational and storage resources they require to conduct their work and to collaborate within distributed research teams. We provide an overview of a cloud-based elastic virtual infrastructure for research applications that we have established to provide researchers with a collaborative research environment that automatically allocates cloud resources as required. We describe how we have used this infrastructure to support research on the Sun’s corona and how the elasticity provided by cloud infrastructures can be leveraged to provide high-throughput computing resources using a set of off-the-shelf technologies and a small number of additional tools that are simple to deploy and use. The resulting infrastructure has a number of advantages for the researchers compared to traditional clusters or grid computing environments that we discuss in the conclusions.
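The elasticity at the heart of such an infrastructure boils down to a control loop like the following sketch; the sizing rule and limits are illustrative placeholders rather than ELVIRA's actual policy.

    # Toy elasticity loop: size the worker pool to the job backlog, within
    # fixed bounds, growing or shrinking as the queue samples change.
    def rescale(queued_jobs, jobs_per_worker=4, min_workers=1, max_workers=32):
        """Return how many workers the pool should have for this backlog."""
        wanted = max(min_workers, -(-queued_jobs // jobs_per_worker))  # ceil
        return min(max_workers, wanted)

    pool = 1
    for backlog in [3, 40, 120, 10, 0]:      # simulated queue-length samples
        target = rescale(backlog)
        action = ("grow" if target > pool else
                  "shrink" if target < pool else "hold")
        print(f"backlog={backlog:3d} workers {pool}->{target} ({action})")
        pool = target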
Virtual worlds, real traffic : interaction and adaptation
Oliver, Iain Angus
Miller, Alan Henry David
Allison, Colin
https://hdl.handle.net/10023/4448
2023-01-22T12:30:05Z
2010-01-01T00:00:00Z
Metaverses such as Second Life (SL) are a relatively new type of Internet application. Their functionality is similar to online 3D games, but differs in that users are able to construct the environment their avatars inhabit and are not constrained by predefined goals. From the network perspective metaverses are similar to games in that timeliness is important, but differ in that their traffic is much less regular and requires more bandwidth. This paper contributes to our understanding of metaverse traffic by validating previous studies and offering new insights. In particular we analyse the relationships between application functionality, SL's traffic control system and the wider network environment. Two sets of studies have been carried out: one of the traffic generated by a hands-on workshop which used SL; and a follow-up set of controlled experiments to clarify some of the findings from the first study. The interplay between network latency, SL's traffic throttle settings, avatar density, and the errors in the client's estimation of avatar positions is demonstrated. These insights are of particular interest to those designing traffic management schemes for metaverses and help explain some of the oddities in the current user experience.
Proceedings of MMSys '10, the first annual ACM SIGMM Conference on Multimedia Systems.
Managing humanitarian emergencies : Teaching and learning with a virtual humanitarian disaster tool
Ajinomoh, Olatokunbo
Miller, Alan Henry David
Dow, Lisa
Gordon-Gibson, Alasdair Norman Stewart
Burt, Eleanor
https://hdl.handle.net/10023/4201
2023-01-22T12:30:09Z
2012-01-01T00:00:00Z
The importance of specialist intervention in the form of humanitarian aid from governments, NGOs and other aid agencies during a humanitarian emergency cannot be over-emphasised. Humanitarian aid is the assistance provided in response to a humanitarian crisis. Humanitarian aid may be logistical, financial or material, and its central aim is to alleviate human suffering and save lives. This paper describes an inter-disciplinary project that created the Virtual Humanitarian Disaster learning and teaching resource (VHD), which is centred on the events occurring in the aftermath of an earthquake. To facilitate learning, scenarios with integrated task dilemmas have been modelled, providing the opportunity for users of the resource to explore the inter-relationships between the key areas of activity which are important to the NGOs and other bodies that deliver humanitarian aid. Such areas include geo-political relationships, legal and regulatory requirements, information management, and logistical, financial and human resource management imperatives. The VHD is primarily aimed at students. It creates a more flexible learning and teaching environment when compared with traditional classroom methods. The resource enables students to make decisions concerning critical situations within the controlled environment of a virtual world, where the consequences of any wrong decisions will not directly impact on lives and property. The VHD has been embedded within an undergraduate module of the School of Management, as it specifically relates to the final thematic area within which the module engages, namely the strategic and operational challenges faced by NGOs operating in the “humanitarian relief industry”. We demonstrate that virtual worlds can be used to enhance learning and make it more engaging. The VHD affords students the opportunity to explore given scenarios in accordance with a specified budget and, in so doing, they realise module outcomes in a more active and authentic learning environment.
The project received start-up funding in the form of a University of St Andrews FILTA award
Virtual machines for virtual worlds
Sanatinia, Amirali
Oliver, Iain Angus
Miller, Alan Henry David
Allison, Colin
https://hdl.handle.net/10023/4200
2023-01-22T12:30:08Z
2012-01-01T00:00:00Z
Multi User Virtual Worlds provide a simulated immersive 3D environment that is similar to the real world. Popular examples include Second Life and OpenSim. The multi-user nature of these simulations means that there are significant computational demands on the processes that render the different avatar-centric views of the world for each participant, which change with every movement or interaction each participant makes. Maintaining quality of experience can be difficult when the density of avatars within the same area suddenly grows beyond a relatively small number. As such, virtual worlds have a dynamic resource-on-demand need that could conceivably be met by Cloud technologies. In this paper we make a start on assessing the feasibility of using the Cloud for virtual worlds by measuring the performance of virtual worlds in virtual machines of the type used for Clouds. A suitable benchmark is researched and formulated, and the construction of a test-bed for carrying out load experiments is described. The system is then used to evaluate the performance of virtual worlds running in virtual machines. The results are presented and analysed before we present the design of a system that we have built for managing virtual worlds in the cloud.
2012-01-01T00:00:00Z
Sanatinia, Amirali
Oliver, Iain Angus
Miller, Alan Henry David
Allison, Colin
Multi User Virtual Worlds provide a simulated immersive 3D environment that is similar to the real world. Popular examples include Second Life and OpenSim. The multi-user nature of these simulations means that there are significant computational demands on the processes that render the different avatar-centric views of the world for each participant, which change with every movement or interaction each participant makes. Maintaining quality of experience can be difficult when the density of avatars within the same area suddenly grows beyond a relatively small number. As such, virtual worlds have a dynamic resource-on-demand need that could conceivably be met by Cloud technologies. In this paper we begin to assess the feasibility of using the Cloud for virtual worlds by measuring the performance of virtual worlds in virtual machines of the type used for Clouds. A suitable benchmark is researched and formulated and the construction of a test-bed for carrying out load experiments is described. The system is then used to evaluate the performance of virtual worlds running in virtual machines. The results are presented and analysed, followed by the design of a system that we have built for managing virtual worlds in the cloud.
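The load-testing methodology described above lends itself to a simple harness. The Python sketch below is purely illustrative (the simulate_tick workload and the avatar counts are invented stand-ins, not the paper's benchmark): it measures how mean frame time grows with avatar density, the quantity a cloud manager would monitor before provisioning further virtual machines.

    import time

    def simulate_tick(avatar_count):
        # Stand-in workload for one simulation frame; a real benchmark
        # would drive an OpenSim region with scripted avatar sessions.
        total = 0
        for i in range(avatar_count * 10000):
            total += i * i
        return total

    def benchmark(avatar_counts, ticks=50):
        # Record mean frame time as avatar density grows.
        results = {}
        for n in avatar_counts:
            start = time.perf_counter()
            for _ in range(ticks):
                simulate_tick(n)
            results[n] = (time.perf_counter() - start) / ticks
        return results

    for n, t in benchmark([1, 10, 50]).items():
        print(f"{n:3d} avatars: {t * 1000:.1f} ms per frame")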
Exploring heritage through time and space : Supporting community reflection on the highland clearances
McCaffery, John Philip
Miller, Alan Henry David
Kennedy, Sarah Elizabeth
Dawson, Tom
Vermehren, Anna
Lefley, C
Strickland, K
https://hdl.handle.net/10023/4192
2024-02-25T00:38:06Z
2013-10-01T00:00:00Z
On the two-hundredth anniversary of the Kildonan clearances, when people were forcibly removed from their homes, the Timespan Heritage centre has created a program of community-centred work aimed at challenging preconceptions and encouraging reflection on this important historical process. This paper explores the innovative ways in which virtual world technology has facilitated community engagement, enhanced visualisation and encouraged reflection as part of this program. An installation where users navigate through a reconstruction of pre-clearance Caen township is controlled through natural gestures and presented on a 300-inch, six-megapixel screen. This environment allows users to experience the past in new ways. The platform has value as an effective way for an educator, artist or hobbyist to create large-scale virtual environments using off-the-shelf hardware and open source software. The result is an exhibit that also serves as a platform for experimentation into innovative ways of community co-creation and co-curation.
2013-10-01T00:00:00Z
McCaffery, John Philip
Miller, Alan Henry David
Kennedy, Sarah Elizabeth
Dawson, Tom
Vermehren, Anna
Lefley, C
Strickland, K
On the two-hundredth anniversary of the Kildonan clearances, when people were forcibly removed from their homes, the Timespan Heritage centre has created a program of community-centred work aimed at challenging preconceptions and encouraging reflection on this important historical process. This paper explores the innovative ways in which virtual world technology has facilitated community engagement, enhanced visualisation and encouraged reflection as part of this program. An installation where users navigate through a reconstruction of pre-clearance Caen township is controlled through natural gestures and presented on a 300-inch, six-megapixel screen. This environment allows users to experience the past in new ways. The platform has value as an effective way for an educator, artist or hobbyist to create large-scale virtual environments using off-the-shelf hardware and open source software. The result is an exhibit that also serves as a platform for experimentation into innovative ways of community co-creation and co-curation.
Mobile cross reality for cultural heritage
Davies, Christopher John
Miller, Alan Henry David
Allison, Colin
https://hdl.handle.net/10023/4190
2023-01-22T12:30:16Z
2013-10-01T00:00:00Z
Widespread adoption of smartphones and tablets has enabled people to multiplex their physical reality, where they engage in face-to-face social interaction, with Web-based social networks and apps, whilst emerging 3D Web technologies hold promise for networks of parallel 3D virtual environments. Although current technologies allow this multiplexing of physical reality and 2D Web, in a situation called PolySocial Reality, the same cannot yet be achieved with 3D content. Cross Reality was proposed to address this issue; however, so far it has focused on the use of fixed links between physical and virtual environments in closed lab settings, limiting investigation of the explorative and social aspects. This paper presents an architecture and implementation that addresses these shortcomings, using a tablet computer and the Pangolin virtual world viewer to provide a mobile interface to a corresponding 3D virtual environment. Motivation for this project stemmed from a desire to enable students to interact with existing virtual reconstructions of cultural heritage sites in tandem with exploration of the corresponding real locations, avoiding the temporal separation otherwise caused by interacting with the virtual content only within the classroom. The accuracy of GPS tracking emerged as a constraint on this style of interaction.
2013-10-01T00:00:00Z
Davies, Christopher John
Miller, Alan Henry David
Allison, Colin
Widespread adoption of smartphones and tablets has enabled people to multiplex their physical reality, where they engage in face-to-face social interaction, with Web-based social networks and apps, whilst emerging 3D Web technologies hold promise for networks of parallel 3D virtual environments. Although current technologies allow this multiplexing of physical reality and 2D Web, in a situation called PolySocial Reality, the same cannot yet be achieved with 3D content. Cross Reality was proposed to address this issue; however, so far it has focused on the use of fixed links between physical and virtual environments in closed lab settings, limiting investigation of the explorative and social aspects. This paper presents an architecture and implementation that addresses these shortcomings, using a tablet computer and the Pangolin virtual world viewer to provide a mobile interface to a corresponding 3D virtual environment. Motivation for this project stemmed from a desire to enable students to interact with existing virtual reconstructions of cultural heritage sites in tandem with exploration of the corresponding real locations, avoiding the temporal separation otherwise caused by interacting with the virtual content only within the classroom. The accuracy of GPS tracking emerged as a constraint on this style of interaction.
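Linking a physical location to a position in the corresponding virtual reconstruction reduces, at its simplest, to projecting a GPS fix into region-local coordinates. The sketch below is a hedged illustration of that one step: the equirectangular projection and the calibration origin are assumptions, and it does not reflect the Pangolin viewer's actual interface.

    import math

    def gps_to_region(lat, lon, origin_lat, origin_lon):
        # Project a GPS fix into flat, metre-scaled coordinates relative
        # to a calibration origin (equirectangular approximation, which
        # is adequate over the few hundred metres of a heritage site).
        metres_per_degree = 111320.0
        x = (lon - origin_lon) * metres_per_degree * math.cos(math.radians(origin_lat))
        y = (lat - origin_lat) * metres_per_degree
        return x, y

    # A fix a short walk from the calibration origin.
    print(gps_to_region(56.3400, -2.7960, 56.3398, -2.7965))

GPS error of several metres translates directly into avatar position error, which is the constraint on interaction the paper reports.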
Open Virtual Worlds : A serious platform for experiential and game based learning
Miller, Alan Henry David
Allison, Colin
Getchell, Kristoffer Marc
https://hdl.handle.net/10023/4182
2023-01-22T12:30:21Z
2012-01-01T00:00:00Z
This paper presents our experiences of, and reflections on, five years' work in using virtual worlds to support exploratory learning across a range of disciplines and educational contexts. Both educational and systems aspects are considered. Experiential learning enriches education by allowing exploration of a subject. However, barriers of time, place, cost or scale often make it difficult to conduct real-world experiential learning. This paper presents experiences in utilizing virtual worlds to support experiential learning in the arts, humanities and sciences. The work presented here draws upon several years of experience in designing, developing and deploying Virtual World applications which address the concrete needs of specific subject areas in a range of educational contexts. The work was motivated by the observation that 3D educational environments could leverage the digital literacy developed playing console games and provide an engaging learning experience where users navigate a virtual environment much as they would the real world. Furthermore, developments in computer hardware and networking mean that 3D applications run on standard computers found in offices and educational institutions. The initial application developed was a simulation of an archeological excavation. Prototypes were developed in a First Person Shooter game, a Virtual Reality environment and a Virtual World. We found that Virtual World technology offered users presence through the proxy of avatars and powerful support for shaping and programming the environment. Initially a simulation of an archeological dig, a virtual teaching space for a management course, a virtual laboratory for wireless networking and a lab for exploring Human Computer Interaction were developed on a Second Life island. The experience was positive and students engaged in valuable learning activities that would not otherwise have been possible. However, we came up against constraints that were not inherent in Virtual Worlds per se but rather flowed from Second Life's service model. This led us to migrate our development platform from Second Life to OpenSim. The ability for institutions to manage their own virtual world servers offers benefits in the areas of content creation, application development, cost and scalability. However, in providing a virtual world service a number of challenges arise, which must be met if the potential of educational virtual worlds is to be realised. These challenges lie in the realms of application design, support for resource creation, and system support. The power of Open Virtual Worlds is illustrated here by presenting three exemplar applications developed on OpenSim. These are a virtual laboratory for experimenting with Internet routing protocols, a reconstruction of Scotland's largest and most important religious building, St Andrews Cathedral, and a tool for learning about intervention in humanitarian disasters. A number of subjects and educational contexts are considered: contexts include PhD and master's research projects, laboratory sessions as part of accredited degree programs, open days for aspiring entrants, an exhibition held in a science center attended by the “interested public”, and parties of primary school students with their teachers as well as scouts and cubs on a day's expedition. Subject areas include computer science, archeology, art history, history and management.
Taken together, this work demonstrates the power of virtual worlds as a platform for developing 3D applications that support heterogeneous exploratory learning. Challenges remain to be met, but the potential is considerable.
2012-01-01T00:00:00Z
Miller, Alan Henry David
Allison, Colin
Getchell, Kristoffer Marc
This paper presents our experiences of, and reflections on, five years' work in using virtual worlds to support exploratory learning across a range of disciplines and educational contexts. Both educational and systems aspects are considered. Experiential learning enriches education by allowing exploration of a subject. However, barriers of time, place, cost or scale often make it difficult to conduct real-world experiential learning. This paper presents experiences in utilizing virtual worlds to support experiential learning in the arts, humanities and sciences. The work presented here draws upon several years of experience in designing, developing and deploying Virtual World applications which address the concrete needs of specific subject areas in a range of educational contexts. The work was motivated by the observation that 3D educational environments could leverage the digital literacy developed playing console games and provide an engaging learning experience where users navigate a virtual environment much as they would the real world. Furthermore, developments in computer hardware and networking mean that 3D applications run on standard computers found in offices and educational institutions. The initial application developed was a simulation of an archeological excavation. Prototypes were developed in a First Person Shooter game, a Virtual Reality environment and a Virtual World. We found that Virtual World technology offered users presence through the proxy of avatars and powerful support for shaping and programming the environment. Initially a simulation of an archeological dig, a virtual teaching space for a management course, a virtual laboratory for wireless networking and a lab for exploring Human Computer Interaction were developed on a Second Life island. The experience was positive and students engaged in valuable learning activities that would not otherwise have been possible. However, we came up against constraints that were not inherent in Virtual Worlds per se but rather flowed from Second Life's service model. This led us to migrate our development platform from Second Life to OpenSim. The ability for institutions to manage their own virtual world servers offers benefits in the areas of content creation, application development, cost and scalability. However, in providing a virtual world service a number of challenges arise, which must be met if the potential of educational virtual worlds is to be realised. These challenges lie in the realms of application design, support for resource creation, and system support. The power of Open Virtual Worlds is illustrated here by presenting three exemplar applications developed on OpenSim. These are a virtual laboratory for experimenting with Internet routing protocols, a reconstruction of Scotland's largest and most important religious building, St Andrews Cathedral, and a tool for learning about intervention in humanitarian disasters. A number of subjects and educational contexts are considered: contexts include PhD and master's research projects, laboratory sessions as part of accredited degree programs, open days for aspiring entrants, an exhibition held in a science center attended by the “interested public”, and parties of primary school students with their teachers as well as scouts and cubs on a day's expedition. Subject areas include computer science, archeology, art history, history and management.
Taken together, this work demonstrates the power of virtual worlds as a platform for developing 3D applications that support heterogeneous exploratory learning. Challenges remain to be met, but the potential is considerable.
Towards the 3D Web with Open Simulator
Oliver, Iain Angus
Miller, Alan Henry David
Allison, Colin
Kennedy, Sarah Elizabeth
Dow, Lisa
Campbell, Anne
Davies, Christopher John
McCaffery, John Philip
https://hdl.handle.net/10023/4137
2023-01-22T12:30:13Z
2013-03-25T00:00:00Z
Continuing advances and reduced costs in computational power, graphics processors and network bandwidth have led to 3D immersive multi-user virtual worlds becoming increasingly accessible while offering an improved and engaging Quality of Experience. At the same time the functionality of the World Wide Web continues to expand alongside the computing infrastructure it runs on, and pages can now routinely accommodate many forms of interactive multimedia components as standard features, streaming video for example. Inevitably there is an emerging expectation that the Web will expand further to incorporate immersive 3D environments. This is exciting because humans are well adapted to operating in 3D environments, and it is challenging because existing software and skill sets are focused around competencies in 2D Web applications. Open Simulator (OpenSim) is a freely available open source tool-kit that empowers users to create and deploy their own 3D environments in the same way that anyone can create and deploy a Web site. Its characteristics can be seen as a reference for how the 3D Web could be instantiated. This paper describes experiments carried out with OpenSim to better understand network and system issues, and presents experience in using OpenSim to develop and deliver applications for education and cultural heritage. Evaluation is based upon observations of these applications in use and measurements of systems both in the lab and in the wild.
2013-03-25T00:00:00Z
Oliver, Iain Angus
Miller, Alan Henry David
Allison, Colin
Kennedy, Sarah Elizabeth
Dow, Lisa
Campbell, Anne
Davies, Christopher John
McCaffery, John Philip
Continuing advances and reduced costs in computational power, graphics processors and network bandwidth have led to 3D immersive multi-user virtual worlds becoming increasingly accessible while offering an improved and engaging Quality of Experience. At the same time the functionality of the World Wide Web continues to expand alongside the computing infrastructure it runs on, and pages can now routinely accommodate many forms of interactive multimedia components as standard features, streaming video for example. Inevitably there is an emerging expectation that the Web will expand further to incorporate immersive 3D environments. This is exciting because humans are well adapted to operating in 3D environments, and it is challenging because existing software and skill sets are focused around competencies in 2D Web applications. Open Simulator (OpenSim) is a freely available open source tool-kit that empowers users to create and deploy their own 3D environments in the same way that anyone can create and deploy a Web site. Its characteristics can be seen as a reference for how the 3D Web could be instantiated. This paper describes experiments carried out with OpenSim to better understand network and system issues, and presents experience in using OpenSim to develop and deliver applications for education and cultural heritage. Evaluation is based upon observations of these applications in use and measurements of systems both in the lab and in the wild.
Interfacing Coq + SSReflect with GAP
Komendantsky, Vladimir
Konovalov, Alexander
Linton, Stephen Alexander
https://hdl.handle.net/10023/3175
2023-04-18T09:46:36Z
2012-09-19T00:00:00Z
We report on an extendable implementation of the communication interface connecting the Coq proof assistant to the computational algebra system GAP, using the Symbolic Computation Software Composability Protocol (SCSCP). It allows Coq to issue OpenMath requests to local or remote GAP instances and to represent server responses as Coq terms.
Presentation slides and preprint both provided by author. Preprint published in Electronic Notes in Theoretical Computer Science: Proceedings of the 9th International Workshop On User Interfaces for Theorem Provers (UITP10).
2012-09-19T00:00:00Z
Komendantsky, Vladimir
Konovalov, Alexander
Linton, Stephen Alexander
We report on an extendable implementation of the communication interface connecting the Coq proof assistant to the computational algebra system GAP, using the Symbolic Computation Software Composability Protocol (SCSCP). It allows Coq to issue OpenMath requests to local or remote GAP instances and to represent server responses as Coq terms.
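To make the request format concrete, the sketch below builds a minimal OpenMath object of the kind carried by SCSCP messages. It is a hedged illustration: arith1/plus is a standard OpenMath content dictionary symbol, but the SCSCP procedure-call envelope and the transport to a GAP server are omitted here.

    import xml.etree.ElementTree as ET

    def openmath_apply(cd, name, *int_args):
        # Build an OpenMath application: a symbol from a content
        # dictionary applied to integer arguments.
        omobj = ET.Element("OMOBJ")
        oma = ET.SubElement(omobj, "OMA")
        ET.SubElement(oma, "OMS", cd=cd, name=name)
        for arg in int_args:
            ET.SubElement(oma, "OMI").text = str(arg)
        return ET.tostring(omobj, encoding="unicode")

    # Ask a CAS to add two integers. Over SCSCP this payload would be
    # wrapped in a procedure-call message and sent to the GAP server.
    print(openmath_apply("arith1", "plus", 2, 3))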
Visual ageing of human faces in three dimensions using morphable models and projection to latent structures
Hunter, David William
Tiddeman, Bernard Paul
https://hdl.handle.net/10023/3066
2023-04-19T00:37:25Z
2009-02-01T00:00:00Z
We present an approach to synthesising the effects of ageing on human face images using three-dimensional modelling. We extract a set of three-dimensional face models from a set of two-dimensional face images by fitting a Morphable Model. We propose a method to age these face models using Partial Least Squares to extract from the data-set those factors most related to ageing. These ageing-related factors are used to train an individually weighted linear model. We show that this is an effective means of producing an aged face image and compare this method to two other linear methods for ageing face models. This is demonstrated both quantitatively and with perceptual evaluation using human raters.
2009-02-01T00:00:00Z
Hunter, David William
Tiddeman, Bernard Paul
We present an approach to synthesising the effects of ageing on human face images using three-dimensional modelling. We extract a set of three-dimensional face models from a set of two-dimensional face images by fitting a Morphable Model. We propose a method to age these face models using Partial Least Squares to extract from the data-set those factors most related to ageing. These ageing-related factors are used to train an individually weighted linear model. We show that this is an effective means of producing an aged face image and compare this method to two other linear methods for ageing face models. This is demonstrated both quantitatively and with perceptual evaluation using human raters.
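A minimal sketch of the Partial Least Squares step, using synthetic data in place of fitted Morphable Model parameters; the array shapes, the single ageing direction and the linear update rule are simplifying assumptions, not the paper's exact model.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic stand-ins: X holds one flattened face-model parameter
    # vector per subject, y the subjects' ages.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = 30 + 5 * X[:, 0] + rng.normal(scale=2, size=200)

    # PLS extracts the latent directions in model space most related
    # to age, rather than merely the directions of greatest variance.
    pls = PLSRegression(n_components=2)
    pls.fit(X, y.reshape(-1, 1))

    def age_face(params, years, strength=1.0):
        # Move a face model along the dominant PLS ageing direction.
        return params + strength * years * pls.x_weights_[:, 0]

    aged = age_face(X[0], years=10)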
Facebook or Fakebook? : The effects of simulated mobile applications on simulated mobile networks
Parris, Iain Siraj
Ben Abdesslem, Fehmi
Henderson, Tristan
https://hdl.handle.net/10023/2761
2023-04-18T09:44:45Z
2012-01-01T00:00:00Z
The credibility of mobile ad hoc network simulations depends on accurate characterisations of user behaviour, e.g., mobility and application usage. If simulated nodes communicate at different rates to real nodes, or move in an unrealistic fashion, this may have a large impact on the network protocols being simulated and tested. Many future mobile network protocols, however, may also depend on future mobile applications. Different applications may be used at different rates or in different manners. But how can we determine realistic user behaviour for such applications that do not yet exist? One common solution is again simulation, but this time simulation of these future applications. This paper examines differences in user behaviour between a real and simulated mobile social networking application through a user study (n=80). We show that there are distinct differences in privacy behaviour between the real and simulated groups. We then simulate a mobile opportunistic network application using two real-world traces to demonstrate the impact of using real and simulated applications. We find large differences between using real and synthetic models of privacy behaviour, but smaller differences between models derived from the real and simulated applications.
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/G002606/1].
2012-01-01T00:00:00Z
Parris, Iain Siraj
Ben Abdesslem, Fehmi
Henderson, Tristan
The credibility of mobile ad hoc network simulations depends on accurate characterisations of user behaviour, e.g., mobility and application usage. If simulated nodes communicate at different rates to real nodes, or move in an unrealistic fashion, this may have a large impact on the network protocols being simulated and tested. Many future mobile network protocols, however, may also depend on future mobile applications. Different applications may be used at different rates or in different manners. But how can we determine realistic user behaviour for such applications that do not yet exist? One common solution is again simulation, but this time simulation of these future applications. This paper examines differences in user behaviour between a real and simulated mobile social networking application through a user study (n=80). We show that there are distinct differences in privacy behaviour between the real and simulated groups. We then simulate a mobile opportunistic network application using two real-world traces to demonstrate the impact of using real and simulated applications. We find large differences between using real and synthetic models of privacy behaviour, but smaller differences between models derived from the real and simulated applications.
Computation of infix probabilities for probabilistic context-free grammars
Nederhof, Mark Jan
Satta, Giorgio
https://hdl.handle.net/10023/2426
2023-04-19T00:38:02Z
2011-07-01T00:00:00Z
The notion of infix probability has been introduced in the literature as a generalization of the notion of prefix (or initial substring) probability, motivated by applications in speech recognition and word error correction. For the case where a probabilistic context-free grammar is used as language model, methods for the computation of infix probabilities have been presented in the literature, based on various simplifying assumptions. Here we present a solution that applies to the problem in its full generality.
2011-07-01T00:00:00Z
Nederhof, Mark Jan
Satta, Giorgio
The notion of infix probability has been introduced in the literature as a generalization of the notion of prefix (or initial substring) probability, motivated by applications in speech recognition and word error correction. For the case where a probabilistic context-free grammar is used as language model, methods for the computation of infix probabilities have been presented in the literature, based on various simplifying assumptions. Here we present a solution that applies to the problem in its full generality.
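For orientation, the sketch below implements the classical inside computation for the probability of a complete string under a PCFG in Chomsky normal form; prefix and infix probabilities generalise this quantity by summing over unseen left and right contexts, which is the problem the paper solves in full generality.

    from collections import defaultdict

    def inside_probability(rules, start, words):
        # rules: list of (lhs, rhs, prob) with rhs a 1-tuple (terminal)
        # or a 2-tuple (nonterminals); Chomsky normal form assumed.
        n = len(words)
        inside = defaultdict(float)  # (i, j, A) -> P(A =>* words[i:j])
        for i, w in enumerate(words):
            for lhs, rhs, p in rules:
                if rhs == (w,):
                    inside[i, i + 1, lhs] += p
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for lhs, rhs, p in rules:
                    if len(rhs) == 2:
                        B, C = rhs
                        for k in range(i + 1, j):
                            inside[i, j, lhs] += (
                                p * inside[i, k, B] * inside[k, j, C])
        return inside[0, n, start]

    grammar = [("S", ("S", "S"), 0.4), ("S", ("a",), 0.6)]
    print(inside_probability(grammar, "S", ["a", "a"]))  # 0.4 * 0.6 * 0.6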
Reliable online social network data collection
Abdesslem, Fehmi Ben
Parris, Iain
Henderson, Tristan
https://hdl.handle.net/10023/2411
2023-04-19T00:37:50Z
2012-01-01T00:00:00Z
Large quantities of information are shared through online social networks, making them attractive sources of data for social network research. When studying the usage of online social networks, however, these data may not properly describe users’ behaviours. For instance, the data collected often include only content shared by the users, or content accessible to the researchers, hence obscuring a large amount of data that would help in understanding users’ behaviours and privacy concerns. Moreover, the data collection methods employed in experiments may also have an effect on data reliability, for example when participants self-report inaccurate information or are observed while using a simulated application. Understanding the effects of these collection methods on data reliability is paramount for the study of social networks; for understanding user behaviour; for designing socially-aware applications and services; and for mining data collected from such social networks and applications. This chapter reviews previous research which has looked at social network data collection and user behaviour in these networks. We highlight shortcomings in the methods used in these studies, and introduce our own methodology and user study based on the Experience Sampling Method; we claim our methodology leads to the collection of more reliable data by capturing both those data which are shared and those which are not. We conclude with suggestions for collecting and mining data from online social networks.
2012-01-01T00:00:00Z
Abdesslem, Fehmi Ben
Parris, Iain
Henderson, Tristan
Large quantities of information are shared through online social networks, making them attractive sources of data for social network research. When studying the usage of online social networks, however, these data may not properly describe users’ behaviours. For instance, the data collected often include only content shared by the users, or content accessible to the researchers, hence obscuring a large amount of data that would help in understanding users’ behaviours and privacy concerns. Moreover, the data collection methods employed in experiments may also have an effect on data reliability, for example when participants self-report inaccurate information or are observed while using a simulated application. Understanding the effects of these collection methods on data reliability is paramount for the study of social networks; for understanding user behaviour; for designing socially-aware applications and services; and for mining data collected from such social networks and applications. This chapter reviews previous research which has looked at social network data collection and user behaviour in these networks. We highlight shortcomings in the methods used in these studies, and introduce our own methodology and user study based on the Experience Sampling Method; we claim our methodology leads to the collection of more reliable data by capturing both those data which are shared and those which are not. We conclude with suggestions for collecting and mining data from online social networks.
Scaling measurement experiments to planet-scale: ethical, regulatory and cultural considerations
Henderson, Tristan Nicholas Hoang
Ben Abdesslem, Fehmi
https://hdl.handle.net/10023/2037
2023-04-19T00:37:48Z
2009-06-01T00:00:00Z
Conducting planet-scale mobility experiments and measurements is of great interest to network researchers for building the next generation of wireless networking technologies, or for studying inter-disciplinary problems in complex networks. There are many technical challenges that need to be addressed before such experiments can take place. But at the same time, there are many non-technical issues that need to be tackled in order to preserve the welfare of participants in these studies. While some of these issues have been addressed in previous small-scale studies, they become increasingly complex when differences between countries need to be taken into account. This position paper highlights some of these issues and argues that they need to be addressed before planet-scale measurement experiments can be conducted. We discuss ethical, regulatory, cultural and privacy issues, and consider how to design measurement systems that will scale up to planet-wide experiments. We motivate our approach by discussing work in measurement of mobile and online social networks.
Workshop held as part of 7th Annual International Conference on Mobile Systems, Applications and Services (MobiSys 2009)
2009-06-01T00:00:00Z
Henderson, Tristan Nicholas Hoang
Ben Abdesslem, Fehmi
Conducting planet-scale mobility experiments and measurements is of great interest to network researchers for building the next generation of wireless networking technologies, or for studying inter-disciplinary problems in complex networks. There are many technical challenges that need to be addressed before such experiments can take place. But at the same time, there are many non-technical issues that need to be tackled in order to preserve the welfare of participants in these studies. While some of these issues have been addressed in previous small-scale studies, they become increasingly complex when differences between countries need to be taken into account. This position paper highlights some of these issues and argues that they need to be addressed before planet-scale measurement experiments can be conducted. We discuss ethical, regulatory, cultural and privacy issues, and consider how to design measurement systems that will scale up to planet-wide experiments. We motivate our approach by discussing work in measurement of mobile and online social networks.
Practical privacy-aware opportunistic networking
Parris, Iain
Henderson, Tristan
https://hdl.handle.net/10023/2011
2023-04-18T09:34:09Z
2011-07-05T00:00:00Z
Opportunistic networks have been the subject of much research, in particular on making end-to-end routing efficient. Users’ privacy concerns, however, have received far less attention. What privacy concerns might opportunistic network users have? Is it possible to build opportunistic networks that can mitigate users’ privacy concerns while maintaining routing performance? Our work to date has tackled the problem of creating privacy-preserving routing protocols, with less emphasis on discovering users’ actual privacy concerns. We summarise our current results, and describe a planned future experiment to better understand users’ privacy concerns.
2011-07-05T00:00:00Z
Parris, Iain
Henderson, Tristan
Opportunistic networks have been the subject of much research, in particular on making end-to-end routing efficient. Users’ privacy concerns, however, have received far less attention. What privacy concerns might opportunistic network users have? Is it possible to build opportunistic networks that can mitigate users’ privacy concerns while maintaining routing performance? Our work to date has tackled the problem of creating privacy-preserving routing protocols, with less emphasis on discovering users’ actual privacy concerns. We summarise our current results, and describe a planned future experiment to better understand users’ privacy concerns.
On the selection of connectivity-based metrics for WSNs using a classification of application behaviour
Boyd, Alan
Balasubramaniam, Dharini
Dearle, Alan
Morrison, Ronald
https://hdl.handle.net/10023/1812
2023-04-19T00:37:50Z
2010-06-07T00:00:00Z
This paper addresses a subset of Wireless Sensor Network (WSN) applications in which data is produced by a set of resource-constrained source nodes and forwarded to one or more sink nodes. The performance of such applications is affected by the connectivity of the WSN, since nodes must remain connected in order to transfer data from sources to sinks. Designers use metrics to measure and improve the efficacy of WSN applications. We aim to facilitate the choice of connectivity-based metrics by introducing a classification of WSN applications based on their data collection behaviour and indicating the metrics best suited to the evaluation of particular application classes. We argue that no suitable metric currently exists for a significant class of applications with the following characteristics: 1) application data is periodically routed or disseminated from source nodes to one or more sink nodes, and 2) the application can continue to function with the loss of source nodes although its useful network lifetime diminishes as a result. We present a new metric, known as Connectivity Weighted Transfer, which may be used to evaluate WSN applications with these characteristics.
2010-06-07T00:00:00Z
Boyd, Alan
Balasubramaniam, Dharini
Dearle, Alan
Morrison, Ronald
This paper addresses a subset of Wireless Sensor Network (WSN) applications in which data is produced by a set of resource-constrained source nodes and forwarded to one or more sink nodes. The performance of such applications is affected by the connectivity of the WSN, since nodes must remain connected in order to transfer data from sources to sinks. Designers use metrics to measure and improve the efficacy of WSN applications. We aim to facilitate the choice of connectivity-based metrics by introducing a classification of WSN applications based on their data collection behaviour and indicating the metrics best suited to the evaluation of particular application classes. We argue that no suitable metric currently exists for a significant class of applications with the following characteristics: 1) application data is periodically routed or disseminated from source nodes to one or more sink nodes, and 2) the application can continue to function with the loss of source nodes although its useful network lifetime diminishes as a result. We present a new metric, known as Connectivity Weighted Transfer, which may be used to evaluate WSN applications with these characteristics.
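The paper's precise definition of Connectivity Weighted Transfer is not reproduced here; the sketch below is an illustrative stand-in that captures the stated intent, discounting delivered data by the fraction of source nodes still connected when it was delivered, so that a network which delivers well early but strands its sources scores lower.

    def connectivity_weighted_transfer(rounds):
        # rounds: iterable of (delivered_bytes, connected_sources,
        # total_sources) snapshots taken over the network's lifetime.
        # (Illustrative formula; see the paper for the actual metric.)
        cwt = 0.0
        for delivered, connected, total in rounds:
            cwt += delivered * (connected / total)
        return cwt

    history = [(1000, 10, 10), (900, 8, 10), (400, 3, 10)]
    print(connectivity_weighted_transfer(history))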
A component-based model and language for wireless sensor network applications
Dearle, Alan
Balasubramaniam, Dharini
Lewis, Jonathan Peter
Morrison, Ronald
https://hdl.handle.net/10023/1811
2023-04-19T00:37:49Z
2008-07-01T00:00:00Z
Wireless sensor networks are often used by experts in many different fields to gather data pertinent to their work. Although their expertise may not include software engineering, these users are expected to produce low-level software for a concurrent, real-time and resource-constrained computing environment. In this paper, we introduce a component-based model for wireless sensor network applications and a language, Insense, for supporting the model. An application is modelled as a composition of interacting components and the application model is preserved in the Insense implementation where active components communicate via typed channels. The primary design criteria for Insense include: to abstract over low-level concerns for ease of programming; to permit worst-case space and time usage of programs to be determinable; to support the fractal composition of components whilst eliminating implicit dependencies between them; and, to facilitate the construction of low footprint programs suitable for resource-constrained devices. This paper presents an overview of the component model and Insense, and demonstrates how they meet the above criteria.
2008-07-01T00:00:00Z
Dearle, Alan
Balasubramaniam, Dharini
Lewis, Jonathan Peter
Morrison, Ronald
Wireless sensor networks are often used by experts in many different fields to gather data pertinent to their work. Although their expertise may not include software engineering, these users are expected to produce low-level software for a concurrent, real-time and resource-constrained computing environment. In this paper, we introduce a component-based model for wireless sensor network applications and a language, Insense, for supporting the model. An application is modelled as a composition of interacting components and the application model is preserved in the Insense implementation where active components communicate via typed channels. The primary design criteria for Insense include: to abstract over low-level concerns for ease of programming; to permit worst-case space and time usage of programs to be determinable; to support the fractal composition of components whilst eliminating implicit dependencies between them; and, to facilitate the construction of low footprint programs suitable for resource-constrained devices. This paper presents an overview of the component model and Insense, and demonstrates how they meet the above criteria.
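Insense itself is not shown here; the following Python sketch is only an analogue of the component model it supports, with active components running independently and interacting solely via channels that carry one kind of payload.

    import threading
    import queue

    class Component(threading.Thread):
        # An active component: its behaviour runs in its own thread and
        # it interacts with other components only via channels.
        def __init__(self, behaviour):
            super().__init__()
            self.behaviour = behaviour

        def run(self):
            self.behaviour()

    readings = queue.Queue()  # a channel carrying integer sensor readings

    def sensor():
        for value in [21, 22, 23]:
            readings.put(value)  # send on the channel
        readings.put(None)       # end-of-stream marker

    def sink():
        while (value := readings.get()) is not None:
            print("received", value)

    components = [Component(sensor), Component(sink)]
    for c in components:
        c.start()
    for c in components:
        c.join()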
A collaborative wireless sensor network routing scheme for reducing energy wastage
Boyd, Alan
Balasubramaniam, Dharini
Dearle, Alan
https://hdl.handle.net/10023/1810
2023-04-19T00:37:49Z
2010-05-01T00:00:00Z
A Wireless Sensor Network (WSN) is a network of battery-powered nodes in which data is routed from sources to sinks. Each node consumes energy in order to transmit or receive on its radio. Consequently, an intermediate node that is used by multiple sources will quickly expire. If some sources are unable to route without the presence of that node, any remaining energy they have is wasted. We present a new routing scheme known as node reliance, which rates the degree to which nodes are relied upon in routing. The use of node reliance reduces the contention for intermediate nodes, permitting sources to route to sinks for longer and thus maximising the useful lifetime of the network.
2010-05-01T00:00:00Z
Boyd, Alan
Balasubramaniam, Dharini
Dearle, Alan
A Wireless Sensor Network (WSN) is a network of battery-powered nodes in which data is routed from sources to sinks. Each node consumes energy in order to transmit or receive on its radio. Consequently, an intermediate node that is used by multiple sources will quickly expire. If some sources are unable to route without the presence of that node, any remaining energy they have is wasted. We present a new routing scheme known as node reliance, which rates the degree to which nodes are relied upon in routing. The use of node reliance reduces the contention for intermediate nodes, permitting sources to route to sinks for longer and thus maximising the useful lifetime of the network.
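A hedged sketch of the idea: route source-sink pairs one at a time, and penalise intermediate nodes in proportion to how many earlier routes already rely on them, so that later routes steer around heavily shared nodes. The scoring rule and graph encoding are illustrative assumptions, not the paper's exact scheme.

    import heapq

    def cheapest_path(graph, src, dst, reliance):
        # Dijkstra over unit-cost hops plus a penalty for entering
        # nodes that earlier routes already rely upon.
        best = {src: 0.0}
        prev = {}
        heap = [(0.0, src)]
        while heap:
            cost, node = heapq.heappop(heap)
            if node == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for nxt in graph[node]:
                c = cost + 1 + reliance.get(nxt, 0)
                if c < best.get(nxt, float("inf")):
                    best[nxt], prev[nxt] = c, node
                    heapq.heappush(heap, (c, nxt))
        return None

    def node_reliance_routes(graph, pairs):
        reliance, routes = {}, {}
        for src, dst in pairs:
            path = cheapest_path(graph, src, dst, reliance)
            routes[src, dst] = path
            for node in (path or [])[1:-1]:  # intermediate nodes only
                reliance[node] = reliance.get(node, 0) + 1
        return routes

    g = {"s1": ["m"], "s2": ["m", "n"], "m": ["sink"],
         "n": ["sink"], "sink": []}
    # The second source avoids the node the first already relies on.
    print(node_reliance_routes(g, [("s1", "sink"), ("s2", "sink")]))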
Reflection and reification in process system evolution : experience and opportunity
Greenwood, RM
Balasubramaniam, Dharini
Kirby, Graham Njal Cameron
Mayes, K
Morrison, Ronald
Seet, W
Warboys, BC
Zirintsis, Evangelos
https://hdl.handle.net/10023/1791
2023-04-19T00:37:35Z
2001-01-01T00:00:00Z
Process systems aim to support many people involved in many processes over a long period of time. They provide facilities for storing and manipulating processes in both the representation and enactment domains. This paper argues that process systems should support ongoing transformations between these domains, at any level of granularity. The notion of creating an enactment model instance from a representation is merely one restricted transformation. Especially when process evolution is considered, the case for thinking in terms of model instances is weak. This argument is supported by our experience of the ProcessWeb process system facilities for developing and evolving process models. The idea of hyper-code, which supports very general transformations between representation and enactment domains, is described. This offers the prospect of further improvements in this area.
2001-01-01T00:00:00Z
Greenwood, RM
Balasubramaniam, Dharini
Kirby, Graham Njal Cameron
Mayes, K
Morrison, Ronald
Seet, W
Warboys, BC
Zirintsis, Evangelos
Process systems aim to support many people involved in many processes over a long period of time. They provide facilities for storing and manipulating processes in both the representation and enactment domains. This paper argues that process systems should support ongoing transformations between these domains, at any level of granularity. The notion of creating an enactment model instance from a representation is merely one restricted transformation. Especially when process evolution is considered, the case for thinking in terms of model instances is weak. This argument is supported by our experience of the ProcessWeb process system facilities for developing and evolving process models. The idea of hyper-code, which supports very general transformations between representation and enactment domains, is described. This offers the prospect of further improvements in this area.
A persistent hyper-programming system
Kirby, Graham Njal Cameron
Morrison, Ronald
Munro, DS
Connor, RCH
Cutts, QI
https://hdl.handle.net/10023/1787
2023-04-18T09:33:56Z
1997-01-01T00:00:00Z
We demonstrate the use of a hyper-programming system in building persistent applications. This allows program representations to contain type-safe links to persistent objects embedded directly within the source code. The benefits include improved efficiency and potential for static program checking, reduced programming effort and the ability to display meaningful source-level representations for first-class procedure values. Hyper-programming represents a completely new style of programming which is only possible in a persistent programming system.
1997-01-01T00:00:00Z
Kirby, Graham Njal Cameron
Morrison, Ronald
Munro, DS
Connor, RCH
Cutts, QI
We demonstrate the use of a hyper-programming system in building persistent applications. This allows program representations to contain type-safe links to persistent objects embedded directly within the source code. The benefits include improved efficiency and potential for static program checking, reduced programming effort and the ability to display meaningful source-level representations for first-class procedure values. Hyper-programming represents a completely new style of programming which is only possible in a persistent programming system.
Linguistic reflection in Java
Kirby, Graham Njal Cameron
Morrison, Ronald
Stemple, David Wilber
https://hdl.handle.net/10023/1758
2023-04-18T09:42:40Z
1998-08-01T00:00:00Z
Reflective systems allow their own structures to be altered from within. Here we are concerned with a style of reflection, called linguistic reflection, which is the ability of a running program to generate new program fragments and to integrate these into its own execution. In particular we describe how this kind of reflection may be provided in the compiler-based, strongly typed object-oriented programming language Java. The advantages of the programming technique include attaining high levels of genericity and accommodating system evolution. These advantages are illustrated by an example taken from persistent programming which shows how linguistic reflection allows functionality (program code) to be generated on demand (Just-In-Time) from a generic specification and integrated into the evolving running program. The technique is evaluated against alternative implementation approaches with respect to efficiency, safety and ease of use.
This work is partially supported by the EPSRC through Grant GR/J 67611 ‘Delivering the Benefits of Persistence to System Construction’
1998-08-01T00:00:00Z
Kirby, Graham Njal Cameron
Morrison, Ronald
Stemple, David Wilber
Reflective systems allow their own structures to be altered from within. Here we are concerned with a style of reflection, called linguistic reflection, which is the ability of a running program to generate new program fragments and to integrate these into its own execution. In particular we describe how this kind of reflection may be provided in the compiler-based, strongly typed object-oriented programming language Java. The advantages of the programming technique include attaining high levels of genericity and accommodating system evolution. These advantages are illustrated by an example taken from persistent programming which shows how linguistic reflection allows functionality (program code) to be generated on demand (Just-In-Time) from a generic specification and integrated into the evolving running program. The technique is evaluated against alternative implementation approaches with respect to efficiency, safety and ease of use.
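The paper's mechanism is realised in Java; as a loose Python analogue of the same idea, the sketch below generates a type-specific function from a generic specification at run time, compiles it, and binds it into the running program. All names here are invented for illustration.

    def generate_describe_source(cls):
        # Generate source for a type-specific function on demand,
        # driven by the class's declared fields.
        fields = list(vars(cls).get("__annotations__", {}))
        if not fields:
            return "def describe(obj):\n    pass"
        lines = ["def describe(obj):"]
        lines += [f"    print('{f} =', obj.{f})" for f in fields]
        return "\n".join(lines)

    class Point:
        x: int
        y: int
        def __init__(self, x, y):
            self.x, self.y = x, y

    # Compile the generated fragment and integrate it into execution.
    namespace = {}
    exec(compile(generate_describe_source(Point), "<generated>", "exec"),
         namespace)
    namespace["describe"](Point(3, 4))  # prints: x = 3, then y = 4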
Support for evolving software architectures in the ArchWare ADL
Morrison, Ron
Kirby, Graham
Balasubramaniam, Dharini
Mickan, Kath
Oquendo, Flavio
Cîmpan, Sorana
Warboys, Brian
Snowdon, Bob
Greenwood, Mark
https://hdl.handle.net/10023/1739
2023-04-19T00:37:29Z
2004-01-01T00:00:00Z
Software that cannot evolve is condemned to atrophy: it cannot accommodate the constant revision and re-negotiation of its business goals nor intercept the potential of new technology. To accommodate change in software systems we have defined an active software architecture to be: dynamic in that the structure and cardinality of the components and interactions are changeable during execution; updatable in that components can be replaced; decomposable in that an executing system may be (partially) stopped and split up into its components and interactions; and reflective in that the specification of components and interactions may be evolved during execution. Here we describe the facilities of the ArchWare architecture description language (ADL) for specifying active architectures. The contribution of the work is the unique combination of concepts including: a pi-calculus based communication and expression language for specifying executable architectures; hyper-code as an underlying representation of system execution that can be used for introspection; a decomposition operator to incrementally break up executing systems; and structural reflection for creating new components and binding them into running systems.
2004-01-01T00:00:00Z
Morrison, Ron
Kirby, Graham
Balasubramaniam, Dharini
Mickan, Kath
Oquendo, Flavio
Cîmpan, Sorana
Warboys, Brian
Snowdon, Bob
Greenwood, Mark
Software that cannot evolve is condemned to atrophy: it cannot accommodate the constant revision and re-negotiation of its business goals nor intercept the potential of new technology. To accommodate change in software systems we have defined an active software architecture to be: dynamic in that the structure and cardinality of the components and interactions are changeable during execution; updatable in that components can be replaced; decomposable in that an executing system may be (partially) stopped and split up into its components and interactions; and reflective in that the specification of components and interactions may be evolved during execution. Here we describe the facilities of the ArchWare architecture description language (ADL) for specifying active architectures. The contribution of the work is the unique combination of concepts including: a pi-calculus based communication and expression language for specifying executable architectures; hyper-code as an underlying representation of system execution that can be used for introspection; a decomposition operator to incrementally break up executing systems; and structural reflection for creating new components and binding them into running systems.
A framework for constraint-based deployment and autonomic management of distributed applications (extended abstract)
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
https://hdl.handle.net/10023/1738
2024-03-18T00:44:55Z
2004-05-01T00:00:00Z
We propose a framework for the deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal.
This work is supported by EPSRC Grants GR/M78403, GR/R51872, GR/S44501 and by EC Framework V IST-2001-32360
2004-05-01T00:00:00Z
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
We propose a framework for the deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal.
A framework for constraint-based deployment and autonomic management of distributed applications
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
https://hdl.handle.net/10023/1731
2023-04-18T09:36:17Z
2004-01-01T00:00:00Z
We propose a framework for deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal.
Submitted to ICAC-04. Extended abstract available from IEEE at DOI:10.1109/ICAC.2004.1301386. This work is supported by EPSRC Grants GR/M78403, GR/R51872, GR/S44501 and by EC Framework V IST-2001-32360
2004-01-01T00:00:00Z
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
We propose a framework for deployment and subsequent autonomic management of component-based distributed applications. An initial deployment goal is specified using a declarative constraint language, expressing constraints over aspects such as component-host mappings and component interconnection topology. A constraint solver is used to find a configuration that satisfies the goal, and the configuration is deployed automatically. The deployed application is instrumented to allow subsequent autonomic management. If, during execution, the manager detects that the original goal is no longer being met, the satisfy/deploy process can be repeated automatically in order to generate a revised deployment that does meet the goal.
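A minimal sketch of the satisfy/deploy cycle, assuming invented component and host names and encoding constraints as Python predicates; a real implementation would use a proper constraint solver and a deployment engine rather than brute-force search.

    from itertools import product

    def satisfy(components, hosts, constraints):
        # Search component-to-host mappings for one meeting every
        # constraint.
        for assignment in product(hosts, repeat=len(components)):
            mapping = dict(zip(components, assignment))
            if all(constraint(mapping) for constraint in constraints):
                return mapping
        return None

    components = ["db", "web", "cache"]
    hosts = ["hostA", "hostB"]
    constraints = [
        lambda m: m["db"] != m["web"],     # keep db and web apart
        lambda m: m["cache"] == m["web"],  # co-locate cache with web
    ]

    print(satisfy(components, hosts, constraints))
    # An autonomic manager would rerun satisfy() and redeploy whenever
    # monitoring showed the current configuration no longer met the goal.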
A methodology for developing and deploying distributed applications
Kirby, Graham Njal Cameron
Walker, Scott Mervyn
Norcross, Stuart John
Dearle, Alan
https://hdl.handle.net/10023/1729
2024-03-04T00:37:57Z
2005-01-01T00:00:00Z
We describe a methodology for developing and deploying distributed Java applications using a reflective middleware system called RAFDA. We illustrate the methodology by describing how it has been used to develop a peer-to-peer infrastructure, and explain the benefits relative to other techniques. The strengths of the approach are that the application logic can be designed and implemented completely independently of distribution concerns, easing the development task, and that this gives great flexibility to alter distribution decisions late in the development cycle.
2005-01-01T00:00:00Z
Kirby, Graham Njal Cameron
Walker, Scott Mervyn
Norcross, Stuart John
Dearle, Alan
We describe a methodology for developing and deploying distributed Java applications using a reflective middleware system called RAFDA. We illustrate the methodology by describing how it has been used to develop a peer-to-peer infrastructure, and explain the benefits relative to other techniques. The strengths of the approach are that the application logic can be designed and implemented completely independently of distribution concerns, easing the development task, and that this gives great flexibility to alter distribution decisions late in the development cycle.
A flexible and secure deployment framework for distributed applications
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
Diaz y Carballo, Juan-Carlos
https://hdl.handle.net/10023/1728
2023-04-19T00:37:29Z
2004-05-20T00:00:00Z
This paper describes an implemented system which is designed to support the deployment of applications offering distributed services, comprising a number of distributed components. This is achieved by creating high level placement and topology descriptions which drive tools that deploy applications consisting of components running on multiple hosts. The system addresses issues of heterogeneity by providing abstractions over host-specific attributes yielding a homogeneous run-time environment into which components may be deployed. The run-time environments provide secure binding mechanisms that permit deployed components to bind to stored data and services on the hosts on which they are running.
2004-05-20T00:00:00Z
Dearle, Alan
Kirby, Graham
McCarthy, Andrew
Diaz y Carballo, Juan-Carlos
This paper describes an implemented system which is designed to support the deployment of applications offering distributed services, comprising a number of distributed components. This is achieved by creating high level placement and topology descriptions which drive tools that deploy applications consisting of components running on multiple hosts. The system addresses issues of heterogeneity by providing abstractions over host-specific attributes yielding a homogeneous run-time environment into which components may be deployed. The run-time environments provide secure binding mechanisms that permit deployed components to bind to stored data and services on the hosts on which they are running.
Design, implementation and deployment of state machines using a generative approach
Kirby, Graham Njal Cameron
Dearle, Alan
Norcross, Stuart John
https://hdl.handle.net/10023/1669
2023-04-19T00:37:30Z
2008-01-01T00:00:00Z
We describe an approach to designing and implementing a distributed system as a family of related finite state machines, generated from a single abstract model. Various artefacts are generated from each state machine, including diagrams, source-level protocol implementations and documentation. The state machine family formalises the interactions between the components of the distributed system, allowing increased confidence in correctness. Our methodology facilitates the application of state machines to problems for which they would not otherwise be suitable. We illustrate the technique with the example of a Byzantine-fault-tolerant commit protocol used in a distributed storage system, showing how an abstract model can be defined in terms of an abstract state space and various categories of state transitions. We describe how such an abstract model can be deployed in a concrete system, and propose a general methodology for developing systems in this style.
2008-01-01T00:00:00Z
Kirby, Graham Njal Cameron
Dearle, Alan
Norcross, Stuart John
We describe an approach to designing and implementing a distributed system as a family of related finite state machines, generated from a single abstract model. Various artefacts are generated from each state machine, including diagrams, source-level protocol implementations and documentation. The state machine family formalises the interactions between the components of the distributed system, allowing increased confidence in correctness. Our methodology facilitates the application of state machines to problems for which they would not otherwise be suitable. We illustrate the technique with the example of a Byzantine-fault-tolerant commit protocol used in a distributed storage system, showing how an abstract model can be defined in terms of an abstract state space and various categories of state transitions. We describe how such an abstract model can be deployed in a concrete system, and propose a general methodology for developing systems in this style.
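A hedged sketch of the generative idea: keep one abstract model of states and transitions, and generate several artefacts from it, here a Graphviz diagram and a source-level transition function. The two-phase-commit flavour of the example is illustrative, not the paper's actual protocol.

    # One abstract model: states plus (state, event) -> state rules.
    MODEL = {
        "states": ["prepared", "committed", "aborted"],
        "transitions": {
            ("prepared", "commit"): "committed",
            ("prepared", "abort"): "aborted",
        },
    }

    def to_dot(model):
        # Artefact 1: a Graphviz diagram of the protocol.
        edges = [f'  {a} -> {b} [label="{e}"];'
                 for (a, e), b in model["transitions"].items()]
        return "digraph sm {\n" + "\n".join(edges) + "\n}"

    def make_step(model):
        # Artefact 2: a source-level implementation of the same model.
        def step(state, event):
            try:
                return model["transitions"][state, event]
            except KeyError:
                raise ValueError(f"illegal event {event!r} in state {state!r}")
        return step

    print(to_dot(MODEL))
    step = make_step(MODEL)
    print(step("prepared", "commit"))  # -> committed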
Orthogonal persistence revisited
Dearle, Alan
Kirby, Graham
Morrison, Ron
https://hdl.handle.net/10023/1665
2024-03-01T00:38:02Z
2009-07-01T00:00:00Z
The social and economic importance of large bodies of programs and data that are potentially long-lived has attracted much attention in the commercial and research communities. Here we concentrate on a set of methodologies and technologies called persistent programming. In particular we review programming language support for the concept of orthogonal persistence, a technique for the uniform treatment of objects irrespective of their types or longevity. While research in persistent programming has become unfashionable, we show how the concept is beginning to appear as a major component of modern systems. We relate these attempts to the original principles of orthogonal persistence and give a few hints about how the concept may be utilised in the future.
2009-07-01T00:00:00Z
Dearle, Alan
Kirby, Graham
Morrison, Ron
The social and economic importance of large bodies of programs and data that are potentially long-lived has attracted much attention in the commercial and research communities. Here we concentrate on a set of methodologies and technologies called persistent programming. In particular we review programming language support for the concept of orthogonal persistence, a technique for the uniform treatment of objects irrespective of their types or longevity. While research in persistent programming has become unfashionable, we show how the concept is beginning to appear as a major component of modern systems. We relate these attempts to the original principles of orthogonal persistence and give a few hints about how the concept may be utilised in the future.
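For a feel of what uniform treatment of objects looks like in practice, the following minimal Python sketch uses the standard shelve module to persist arbitrary picklable objects across runs without type-specific save/load code. This only approximates the concept: true orthogonal persistence makes longevity fully transparent to the program and typically persists objects by reachability.

```python
# Rough illustration of the flavour of orthogonal persistence: the same
# code works on its first run and on every later run, and the objects
# stored are treated uniformly irrespective of their types. shelve is a
# stand-in here, not a genuinely orthogonally persistent system.

import shelve

with shelve.open("store.db", writeback=True) as db:
    # First run seeds the store; later runs see the same objects back.
    db.setdefault("counter", 0)
    db.setdefault("config", {"threads": 4, "name": "demo"})
    db["counter"] += 1
    print("run number:", db["counter"], "| config:", db["config"])
```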
Probabilistic parsing
Nederhof, Mark Jan
Satta, Giorgio
https://hdl.handle.net/10023/1660
2023-04-19T00:37:32Z
2008-01-01T00:00:00Z
2008-01-01T00:00:00Z
Nederhof, Mark Jan
Satta, Giorgio
Computing partition functions of PCFGs
Nederhof, Mark Jan
Satta, Giorgio
https://hdl.handle.net/10023/1659
2023-04-18T09:42:46Z
2008-10-01T00:00:00Z
We investigate the problem of computing the partition function of a probabilistic context-free grammar, and consider a number of applicable methods. Particular attention is devoted to PCFGs that result from the intersection of another PCFG and a finite automaton. We report experiments involving the Wall Street Journal corpus.
Acknowledgement provided in erratum at DOI:10.1007/s11168-009-9062-1
2008-10-01T00:00:00Z
Nederhof, Mark Jan
Satta, Giorgio
We investigate the problem of computing the partition function of a probabilistic context-free grammar, and consider a number of applicable methods. Particular attention is devoted to PCFGs that result from the intersection of another PCFG and a finite automaton. We report experiments involving the Wall Street Journal corpus.
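One standard method for this problem is fixed-point iteration on the system Z_A = sum over rules A -> alpha of p(A -> alpha) times the product of Z_B over the nonterminals B occurring in alpha, starting from Z_A = 0. The sketch below applies that iteration to a made-up one-nonterminal grammar; the grammar is not an example from the paper.

```python
# Fixed-point computation of a PCFG partition function. Z_A is the
# total probability of all finite derivations from nonterminal A; for
# an improper grammar it is strictly less than 1.

# Rules: nonterminal -> list of (probability, right-hand side). Symbols
# appearing as grammar keys are nonterminals; the rest are terminals.
GRAMMAR = {
    "S": [(0.6, ["S", "S"]),   # expected branching 1.2 > 1, so Z < 1
          (0.4, ["a"])],
}

def rhs_product(z, rhs, grammar):
    result = 1.0
    for sym in rhs:
        if sym in grammar:          # nonterminal: multiply in its Z
            result *= z[sym]
    return result

def partition_function(grammar, iters=1000):
    z = {nt: 0.0 for nt in grammar}
    for _ in range(iters):
        z = {nt: sum(p * rhs_product(z, rhs, grammar) for p, rhs in rules)
             for nt, rules in grammar.items()}
    return z

print(partition_function(GRAMMAR))  # {'S': 0.666...}
# Here Z solves Z = 0.6*Z^2 + 0.4; the iteration converges to the
# smallest root, 2/3, and the missing 1/3 of the probability mass is
# lost to infinite derivations.
```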
Privacy-enhanced social network routing in opportunistic networks
Parris, Iain Siraj
Bigwood, Gregory John
Henderson, Tristan Nicholas Hoang
https://hdl.handle.net/10023/1046
2023-04-19T00:37:22Z
2010-03-01T00:00:00Z
Opportunistic networking (forwarding messages in a disconnected mobile ad hoc network via any encountered nodes) offers a new mechanism for exploiting the mobile devices that many users already carry. Forwarding messages in such a network often involves the use of social network routing: sending messages via nodes in the sender's or recipient's social network. Simple social network routing, however, may broadcast these social networks, which introduces privacy concerns. This paper introduces two methods for enhancing privacy in social network routing by obfuscating the social network graphs used to inform routing decisions. We evaluate these methods using two real-world datasets, and find that it is possible to obfuscate the social network information without a significant decrease in routing performance.
2010-03-01T00:00:00Z
Parris, Iain Siraj
Bigwood, Gregory John
Henderson, Tristan Nicholas Hoang
Opportunistic networking (forwarding messages in a disconnected mobile ad hoc network via any encountered nodes) offers a new mechanism for exploiting the mobile devices that many users already carry. Forwarding messages in such a network often involves the use of social network routing: sending messages via nodes in the sender's or recipient's social network. Simple social network routing, however, may broadcast these social networks, which introduces privacy concerns. This paper introduces two methods for enhancing privacy in social network routing by obfuscating the social network graphs used to inform routing decisions. We evaluate these methods using two real-world datasets, and find that it is possible to obfuscate the social network information without a significant decrease in routing performance.
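As an illustration of the general idea (though not necessarily either of the two specific methods evaluated in the paper), the sketch below perturbs a node's friend list, dropping some real friends and adding some non-friends, before the list is shared to inform routing decisions. All names and parameters are hypothetical.

```python
# Generic graph-perturbation sketch: trade a little routing accuracy
# for privacy by obfuscating the friend list a node advertises.

import random

def obfuscate(friends, all_nodes, me, drop_p=0.2, add_p=0.1, seed=None):
    rng = random.Random(seed)
    kept = {f for f in friends if rng.random() > drop_p}     # hide edges
    strangers = set(all_nodes) - set(friends) - {me}
    added = {s for s in strangers if rng.random() < add_p}   # fake edges
    return kept | added

all_nodes = ["a", "b", "c", "d", "e", "f"]
real_friends = {"b", "c", "d"}
shared = obfuscate(real_friends, all_nodes, me="a", seed=42)
print("real:", sorted(real_friends), "| shared:", sorted(shared))
# A forwarding node would consult `shared` (not the true list) when
# deciding whether an encountered node lies in the recipient's social
# network, so an eavesdropper never sees the exact graph.
```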
Patterns of cooperative interaction: Linking ethnomethodology and design
Sommerville, I.
Martin, D.
https://hdl.handle.net/10023/702
2019-03-29T13:25:38Z
2004-01-01T00:00:00Z
Patterns of Cooperative Interaction are regularities in the organisation of work, activity, and interaction amongst participants, and with, through and around artefacts. These patterns are organised around a framework and are inspired by how such regularities are highlighted in ethnomethodologically-informed ethnographic studies of work and technology. They comprise a high-level description and two or more comparable examples drawn from specific studies. Our contention is that these patterns form a useful resource for re-using findings from previous field studies, for enabling analysis, and for considering design in new settings. Previous work on the relationship between ethnomethodology and design has been concerned primarily with providing presentation frameworks and mechanisms, practical advice, schematisations of the ethnomethodologist's role, different possibilities of input at different stages in development, and various conceptualisations of the relationship between study and design. In contrast, this paper first discusses the position of patterns relative to the emergent major topics of interest of these studies. It then makes the case for a collection of patterns based on findings, for their comparison across studies, and for their general implications for design problems, rather than the concerns of practical and methodological interest outlined in other work. Special attention is paid to our evaluations and to how they inform the ways in which the patterns collection may be read, used and contributed to, as well as to reflections on the composition of the collection as it has emerged. The paper finishes with a discussion of how our work relates to other work on patterns, before closing comments on the role of our patterns and ethnomethodology in systems design.
© ACM, 2004. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human Interaction, 11(1): 59-89, 10730516, (March 2004) http://doi.acm.org/10.1145/972648.972651
2004-01-01T00:00:00Z
Sommerville, I.
Martin, D.
Patterns of Cooperative Interaction are regularities in the organisation of work, activity, and interaction amongst participants, and with, through and around artefacts. These patterns are organised around a framework and are inspired by how such regularities are highlighted in ethnomethodologically-informed ethnographic studies of work and technology. They comprise a high-level description and two or more comparable examples drawn from specific studies. Our contention is that these patterns form a useful resource for re-using findings from previous field studies, for enabling analysis, and for considering design in new settings. Previous work on the relationship between ethnomethodology and design has been concerned primarily with providing presentation frameworks and mechanisms, practical advice, schematisations of the ethnomethodologist's role, different possibilities of input at different stages in development, and various conceptualisations of the relationship between study and design. In contrast, this paper first discusses the position of patterns relative to the emergent major topics of interest of these studies. It then makes the case for a collection of patterns based on findings, for their comparison across studies, and for their general implications for design problems, rather than the concerns of practical and methodological interest outlined in other work. Special attention is paid to our evaluations and to how they inform the ways in which the patterns collection may be read, used and contributed to, as well as to reflections on the composition of the collection as it has emerged. The paper finishes with a discussion of how our work relates to other work on patterns, before closing comments on the role of our patterns and ethnomethodology in systems design.