Statistics
http://hdl.handle.net/10023/95
Thu, 24 May 2018 10:17:04 GMT
Chains of subsemigroups
http://hdl.handle.net/10023/13313
We investigate the maximum length of a chain of subsemigroups in various classes of semigroups, such as the full transformation semigroups, the general linear semigroups, and the semigroups of order-preserving transformations of finite chains. In some cases, we give lower bounds for the total number of subsemigroups of these semigroups. We give general results for finite completely regular and finite inverse semigroups. Wherever possible, we state our results in the greatest generality; in particular, we include infinite semigroups where the results hold for them. The length of a subgroup chain in a group is bounded by the logarithm of the group order. This fails for semigroups, but it is perhaps surprising that there is a lower bound for the length of a subsemigroup chain in the full transformation semigroup which is a constant multiple of the semigroup order.
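The logarithmic bound mentioned in the abstract follows from Lagrange's theorem; a one-line sketch of the standard argument (not taken from the paper itself):

```latex
% In a chain H_0 < H_1 < \cdots < H_\ell = G of subgroups of a finite
% group G, each index [H_{i+1} : H_i] is at least 2, so the orders at
% least double at every step:
\[
  |G| \;=\; |H_0| \prod_{i=0}^{\ell - 1} [H_{i+1} : H_i] \;\ge\; 2^{\ell},
  \qquad\text{hence}\qquad
  \ell \;\le\; \log_2 |G| .
\]
```

For semigroups no analogue of Lagrange's theorem is available, which is why the chain length can instead grow linearly in the semigroup order.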
Thu, 01 Jun 2017 00:00:00 GMT
Cameron, Peter J.; Gadouleau, Maximilien; Mitchell, James D.; Peresse, Yann
Timing the landmark events in the evolution of clear cell renal cell cancer : TRACERx renal
http://hdl.handle.net/10023/13131
Clear cell renal cell carcinoma (ccRCC) is characterized by near-universal loss of the short arm of chromosome 3, deleting several tumor suppressor genes. We analyzed whole genomes from 95 biopsies across 33 patients with clear cell renal cell carcinoma. We find hotspots of point mutations in the 5′ UTR of TERT, targeting a MYC-MAX-MAD1 repressor associated with telomere lengthening. The most common structural abnormality generates simultaneous 3p loss and 5q gain (36% of patients), typically through chromothripsis. This event occurs in childhood or adolescence, generally as the initiating event that precedes emergence of the tumor’s most recent common ancestor by years to decades. Similar genomic changes drive inherited ccRCC. Modeling differences in age incidence between inherited and sporadic cancers suggests that the number of cells with 3p loss capable of initiating sporadic tumors is no more than a few hundred. Early development of ccRCC follows well-defined evolutionary trajectories, offering opportunity for early intervention.
The work presented in this manuscript was funded by EU FP7 (project PREDICT ID number 259303) and the Wellcome Trust and Cancer Research UK. S.T. is funded by Cancer Research UK (C50947/A18176). S.T., J.L., and M.G. receive funding from the National Institute for Health Research (NIHR) Biomedical Research Centre at the Royal Marsden Hospital and Institute of Cancer Research (A109). J.H.R.F. and A.G.L. were supported by the University of Cambridge, Cancer Research UK (C14303/A17197), and Hutchison Whampoa. K.L. is supported by a UK Medical Research Council Skills Development Fellowship Award. C.S. is funded by Cancer Research UK (TRACERx), the Rosetrees Trust, NovoNordisk Foundation (16584), EU FP7 (projects PREDICT and RESPONSIFY, ID number 259303), the Prostate Cancer Foundation, the Breast Cancer Research Foundation, the European Research Council (THESEUS), and National Institute for Health Research University College London Hospitals Biomedical Research Centre. P.J.C. has a Wellcome Trust Senior Clinical Research Fellowship (WT088340MA).
Thu, 19 Apr 2018 00:00:00 GMT
Mitchell, Thomas J.; Turajlic, Samra; Rowan, Andrew; Nicol, David; Farmery, James H.R.; O’Brien, Tim; Martincorena, Inigo; Tarpey, Patrick; Angelopoulos, Nicos; Yates, Lucy R.; Butler, Adam P.; Raine, Keiran; Stewart, Grant D.; Challacombe, Ben; Fernando, Archana; Lopez, Jose I.; Hazell, Steve; Chandra, Ashish; Chowdhury, Simon; Rudman, Sarah; Soultati, Aspasia; Stamp, Gordon; Fotiadis, Nicos; Pickering, Lisa; Au, Lewis; Spain, Lavinia; Lynch, Joanna; Stares, Mark; Teague, Jon; Maura, Francesco; Wedge, David C.; Horswell, Stuart; Chambers, Tim; Litchfield, Kevin; Xu, Hang; Stewart, Aengus; Elaidi, Reza; Oudard, Stéphane; McGranahan, Nicholas; Csabai, Istvan; Gore, Martin; Futreal, P. Andrew; Larkin, James; Lynch, Andy G.; Szallasi, Zoltan; Swanton, Charles; Campbell, Peter J.
Statistical issues in first-in-human studies on BIA 10-2474: neglected comparison of protocol against practice
http://hdl.handle.net/10023/12740
By setting the regulatory-approved protocol for a suite of first-in-human studies on BIA 10-2474 against the subsequent French investigations, we highlight six key design and statistical issues which reinforce recommendations made by a Royal Statistical Society Working Party in the aftermath of the cytokine release storm in six healthy volunteers in the UK in 2006. The six issues are dose determination, availability of pharmacokinetic results, dosing interval, stopping rules, appraisal by a safety committee, and the need for a clear algorithm when combining approvals for single and multiple ascending dose studies.
Funding information: European Union's FP7 programme, Grant/Award Number: 602552
Wed, 15 Mar 2017 00:00:00 GMT
Bird, Sheila M.; Bailey, Rosemary A.; Grieve, Andrew P.; Senn, Stephen
Sesqui-arrays, a generalisation of triple arrays
http://hdl.handle.net/10023/12725
A triple array is a rectangular array containing letters, each letter occurring equally often with no repeats in rows or columns, such that the numbers of letters common to two rows, two columns, or a row and a column are (possibly different) non-zero constants. Deleting the condition on the letters common to a row and a column gives a double array. We propose the term sesqui-array for such an array when only the condition on pairs of columns is deleted. Thus all triple arrays are sesqui-arrays. In this paper we give three constructions for sesqui-arrays. The first gives (n+1) x n^2 arrays on n(n+1) letters for n > 1. (Such an array for n = 2 was found by Bagchi.) This construction uses Latin squares. The second uses the Sylvester graph, a subgraph of the Hoffman-Singleton graph, to build a good block design for 36 treatments in 42 blocks of size 6, and then uses this in a 7 x 36 sesqui-array for 42 letters. We also give a construction for K x (K-1)(K-2)/2 sesqui-arrays on K(K-1)/2 letters. This construction uses biplanes. It starts with a block of a biplane and produces an array which satisfies the requirements for a sesqui-array except possibly that of having no repeated letters in a row or column. We show that this condition holds if and only if the Hussain chains for the selected block contain no 4-cycles. A sufficient condition for the construction to give a triple array is that each Hussain chain is a union of 3-cycles; but this condition is not necessary, and we give a few further examples. We also discuss the question of which of these arrays provide good designs for experiments.
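The defining conditions above are easy to verify mechanically. A minimal sketch; the 3 x 4 array used below is hand-built for illustration (it is not one of the paper's constructions) and happens to be a sesqui-array but not a triple array, since its columns may share no letters:

```python
from itertools import combinations

def is_sesqui_array(rows):
    """Check the sesqui-array conditions stated in the abstract:
    equal replication of letters, no repeats within a row or column,
    constant non-zero intersection for pairs of rows and for
    row/column pairs.  (No condition on pairs of columns.)"""
    cols = list(zip(*rows))
    # each letter must occur equally often
    counts = {}
    for row in rows:
        for letter in row:
            counts[letter] = counts.get(letter, 0) + 1
    if len(set(counts.values())) != 1:
        return False
    # no repeated letter in any row or column
    if any(len(set(line)) != len(line) for line in list(rows) + cols):
        return False
    # any two rows share a constant, non-zero number of letters
    row_meets = {len(set(a) & set(b)) for a, b in combinations(rows, 2)}
    if len(row_meets) != 1 or 0 in row_meets:
        return False
    # any row and column share a constant, non-zero number of letters
    rc_meets = {len(set(r) & set(c)) for r in rows for c in cols}
    return len(rc_meets) == 1 and 0 not in rc_meets

# 3 x 4 array on 6 letters, each occurring twice
example = [("a", "b", "c", "d"),
           ("e", "f", "b", "a"),
           ("c", "d", "f", "e")]
```

Here every pair of rows meets in 2 letters and every row/column pair meets in 2 letters, which the counting argument forces once the replication is fixed.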
Mon, 01 Jan 2018 00:00:00 GMT
Bailey, Rosemary Anne; Cameron, Peter Jephson; Nilson, Tomas
Telomerecat : a ploidy-agnostic method for estimating telomere length from whole genome sequencing data
http://hdl.handle.net/10023/12591
Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously unimaginable scale. To this end, a number of approaches for estimating telomere length from whole-genome sequencing data have been proposed. Here we present Telomerecat, a novel approach to the estimation of telomere length. Previous methods have been dependent on the number of telomeres present in a cell being known, which may be problematic when analysing aneuploid cancer data and non-human samples. Telomerecat is designed to be agnostic to the number of telomeres present, making it suited for the purpose of estimating telomere length in cancer studies. Telomerecat also accounts for interstitial telomeric reads and presents a novel approach to dealing with sequencing errors. We show that Telomerecat performs well at telomere length estimation when compared to leading experimental and computational methods. Furthermore, we show that it detects expected patterns in longitudinal data, repeated measurements, and cross-species comparisons. We also apply the method to cancer cell data, uncovering an interesting relationship with the underlying telomerase genotype.
Funding: Cancer Research UK Programme Grant to Simon Tavaré (C14303/A17197) (JHRF, AGL, MLS); European Commission through the Horizon 2020 project SOUND (Grant Agreement no. 633974) (AGL).
Mon, 22 Jan 2018 00:00:00 GMT
Farmery, James; Smith, Mike; NIHR BioResource - Rare Diseases; Lynch, Andy
Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales
http://hdl.handle.net/10023/12427
Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
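The "thinned spatial point process" view of distance sampling can be illustrated with a toy simulation. This is a sketch only, not the paper's log-Gaussian Cox process model: animals form a homogeneous Poisson process in a strip around a transect line, and the observed detections are a thinning by a half-normal detection function of perpendicular distance (all parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_thinned_strip(intensity, length, half_width, sigma, rng):
    """Simulate animals in a strip around a transect line, then thin them
    with a half-normal detection probability in perpendicular distance."""
    # Poisson number of animals in the strip (area = length * 2 * half_width)
    n = rng.poisson(intensity * length * 2 * half_width)
    x = rng.uniform(0.0, length, n)                 # position along the line
    d = rng.uniform(-half_width, half_width, n)     # signed perpendicular distance
    p_detect = np.exp(-d**2 / (2 * sigma**2))       # half-normal detection function
    detected = rng.uniform(size=n) < p_detect       # independent thinning
    return x[detected], d[detected], n

x_det, d_det, n_true = simulate_thinned_strip(
    intensity=0.5, length=100.0, half_width=5.0, sigma=1.5, rng=rng)
# Detections concentrate near the line; far-off animals are thinned away.
```

Model-based distance sampling inverts this: it fits the detection function and the underlying (here homogeneous, in the paper spatially varying) intensity jointly from the thinned observations.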
Fri, 01 Dec 2017 00:00:00 GMT
Yuan, Y.; Bachl, F. E.; Lindgren, F.; Borchers, David Louis; Illian, J. B.; Buckland, S. T.; Rue, H.; Gerrodette, T.
Relations among partitions
http://hdl.handle.net/10023/12407
Combinatorialists often consider a balanced incomplete-block design to consist of a set of points, a set of blocks, and an incidence relation between them which satisfies certain conditions. To a statistician, such a design is a set of experimental units with two partitions, one into blocks and the other into treatments: it is the relation between these two partitions which gives the design its properties. The most common binary relations between partitions that occur in statistics are refinement, orthogonality and balance. When there are more than two partitions, the binary relations may not suffice to give all the properties of the system. I shall survey work in this area, including designs such as double Youden rectangles.
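Two of the binary relations mentioned above can be stated concretely. A minimal sketch, assuming the usual definitions (refinement: every part of one partition lies inside a part of the other; orthogonality taken here in the proportional-meeting sense, |A ∩ B| = |A||B|/|Ω| for all parts A, B):

```python
from itertools import product

def refines(F, G):
    """True if every part of partition F lies inside some part of G."""
    return all(any(A <= B for B in G) for A in F)

def orthogonal(F, G):
    """True if F and G meet proportionally on their common underlying set:
    |A ∩ B| * |Ω| == |A| * |B| for every part A of F and B of G."""
    omega = set().union(*F)
    n = len(omega)
    return all(n * len(A & B) == len(A) * len(B) for A, B in product(F, G))

# Rows and columns of a 2 x 3 grid: a classic orthogonal pair of partitions.
cells = {(i, j) for i in range(2) for j in range(3)}
rows_p = [{(i, j) for j in range(3)} for i in range(2)]
cols_p = [{(i, j) for i in range(2)} for j in range(3)]
```

In a row-column design the block and treatment partitions are a third and fourth partition of the same units, and it is relations like these among all of them that determine the design's properties.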
Sun, 01 Jan 2017 00:00:00 GMT
Bailey, Rosemary Anne
Primary education in Vietnam and pupil online engagement
http://hdl.handle.net/10023/12107
Purpose: This paper focuses on exploring the disparities in social awareness and use of the Internet between urban and rural school children in the North of Vietnam.
Approach: A total of 525 pupils, aged 9 to 11 years old, randomly selected from 7 urban and rural schools, who are Internet users, participated in the study and consented to responding to a questionnaire adapted from an equivalent European Union (EU) study. A comparative statistical analysis of the responses was then carried out, using IBM SPSS v21, which consisted of a descriptive analysis, an identification of personal self-development opportunities, as well as issues related to pupils’ digital prowess and knowledge of Internet use, and Internet safety, including parental engagement in their offspring’s online activities.
Findings: The study highlights the fact that children from both the urban and rural regions of the North of Vietnam mostly access the Internet from home, but with more children in the urbanized areas accessing it at school than their rural counterparts. Although children from the rural areas scored lower on all the Internet indicators, such as digital access and online personal experience and awareness, there was no disparity in awareness of Internet risks between the two sub-samples. It is noteworthy that there was no statistically significant gender difference towards online activities that support self-development. In relation to safe Internet usage, children are likely to seek advice from their parents, rather than through teachers or friends. However, they are not yet provided with an effective safety net while exposing themselves to the digital world.
Mon, 08 Jan 2018 00:00:00 GMT
Nguyen, Quynh; Naguib, Raouf; Das, Ashish; Papathomas, Michail; Vallar, Edgar; Wickramasinghe, Nilmini; Santos, Gil Nonato; Galvez, Maria Cecilia; Nguyen, Viet
Modelling the spatial dynamics of non-state terrorism : world study, 2002-2013
http://hdl.handle.net/10023/12067
To this day, terrorism perpetrated by non-state actors persists as a worldwide threat, as exemplified by the recent lethal attacks in Paris, London, Brussels, and the ongoing massacres perpetrated by the Islamic State in Iraq, Syria and neighbouring countries. In response, states deploy various counterterrorism policies, the costs of which could be reduced through more efficient preventive measures. The literature has not applied statistical models able to account for complex spatio-temporal dependencies, despite their potential for explaining and preventing non-state terrorism at the sub-national level. In an effort to address this shortcoming, this thesis employs Bayesian hierarchical models, where the spatial random field is represented by a stochastic partial differential equation. The results show that lethal terrorist attacks perpetrated by non-state actors tend to be concentrated in areas located within failed states from which they may diffuse locally, towards neighbouring areas. At the sub-national level, the propensity of attacks to be lethal and the frequency of lethal attacks appear to be driven by antagonistic mechanisms. Attacks are more likely to be lethal far away from large cities, at higher altitudes, in less economically developed areas, and in locations with higher ethnic diversity. In contrast, the frequency of lethal attacks tends to be higher in more economically developed areas, close to large cities, and within democratic countries.
Thu, 07 Dec 2017 00:00:00 GMT
Python, André
Modelling complex dependencies inherent in spatial and spatio-temporal point pattern data
http://hdl.handle.net/10023/12009
Point processes are mechanisms that beget point patterns. Realisations of point processes are observed in many contexts, for example, locations of stars in the sky, or locations of trees in a forest. Inferring the mechanisms that drive point processes relies on the development of models that appropriately account for the dependencies inherent in the data. Fitting models that adequately capture the complex dependency structures in either space, time, or both is often problematic. This is commonly due to—but not restricted to—the intractability of the likelihood function, or computational burden of the required numerical operations.
This thesis primarily focuses on developing point process models with some hierarchical structure, and specifically where this is a latent structure that may be considered as one of the following: (i) some unobserved construct assumed to be generating the observed structure, or (ii) some stochastic process describing the structure of the point pattern. Model fitting procedures utilised in this thesis include either (i) approximate-likelihood techniques to circumvent intractable likelihoods, (ii) stochastic partial differential equations to model continuous spatial latent structures, or (iii) improving computational speed in numerical approximations by exploiting automatic differentiation.
Moreover, this thesis extends classic point process models by considering multivariate dependencies. This is achieved through considering a general class of joint point process model, which utilise shared stochastic structures. These structures account for the dependencies inherent in multivariate point process data. These models are applied to data originating from various scientific fields; in particular, applications are considered in ecology, medicine, and geology. In addition, point process models that account for the second order behaviour of these assumed stochastic structures are also considered.
Fri, 23 Jun 2017 00:00:00 GMT
Jones-Todd, Charlotte M
A path reconstruction method integrating dead-reckoning and position fixes applied to humpback whales
http://hdl.handle.net/10023/11976
BACKGROUND: Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. RESULTS: High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. 
CONCLUSIONS: By systematically accounting for the observation errors in the position fixes, our model provides a quantitative estimate of location uncertainty that can be appropriately incorporated into analyses of animal movement. This generic method has potential application for a wide range of marine animal species and data recording systems.
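The core mechanics — integrating heading and speed into a track, then anchoring the accumulated drift to an occasional GPS fix — can be sketched as follows. This is a minimal illustration with hypothetical function names, not the authors' Bayesian state-space model (which weights corrections by calibrated observation error rather than redistributing drift linearly):

```python
import math

def dead_reckon(start, headings_rad, speeds_ms, dt_s):
    """Integrate heading (radians) and speed (m/s) into an x/y track (metres).
    Without external position fixes, the error grows with time."""
    x, y = start
    track = [(x, y)]
    for h, s in zip(headings_rad, speeds_ms):
        x += s * dt_s * math.sin(h)  # east component
        y += s * dt_s * math.cos(h)  # north component
        track.append((x, y))
    return track

def correct_drift(track, gps_fix, fix_index):
    """Redistribute the offset between the dead-reckoned position and a GPS
    fix linearly along the preceding segment (a crude stand-in for the
    model-based correction described in the abstract)."""
    dx = gps_fix[0] - track[fix_index][0]
    dy = gps_fix[1] - track[fix_index][1]
    head = [(x + dx * i / fix_index, y + dy * i / fix_index)
            for i, (x, y) in enumerate(track[:fix_index + 1])]
    return head + track[fix_index + 1:]
```

A fix pins the track exactly at the fix time while leaving the start point untouched, which is why tracks with few or no GPS positions retain much larger positional uncertainty.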
PW received a PhD studentship with matched funding from The Netherlands Ministry of Defence (administered by TNO) and the UK Natural Environment Research Council (NE/J500276/1). The 3S2 project was funded by the US Office of Naval Research (N00014-10-1-0355), the Norwegian Ministry of Defence, and The Netherlands Ministry of Defence. Part of this work was supported by the Multi-study Ocean acoustics Human effects Analysis (MOCHA) project funded by the US Office of Naval Research (N00014-12-1-0204).
Mon, 21 Sep 2015 00:00:00 GMT http://hdl.handle.net/10023/11976 2015-09-21T00:00:00Z
Wensveen, Paul J.; Thomas, Len; Miller, Patrick J. O.
DESNT : a poor prognosis category of human prostate cancer
http://hdl.handle.net/10023/11922
Background: A critical problem in the clinical management of prostate cancer is that it is highly heterogeneous. Accurate prediction of individual cancer behaviour is therefore not achievable at the time of diagnosis, leading to substantial overtreatment. It remains an enigma that, in contrast to breast cancer, unsupervised analyses of global expression profiles have not yet defined robust categories of prostate cancer with distinct clinical outcomes. Objective: To devise a novel classification framework for human prostate cancer based on unsupervised mathematical approaches. Design, setting, and participants: Our analyses are based on the hypothesis that previous attempts to classify prostate cancer have been unsuccessful because individual samples of prostate cancer frequently have heterogeneous compositions. To address this issue, we applied an unsupervised Bayesian procedure called Latent Process Decomposition to four independent prostate cancer transcriptome datasets obtained using samples from prostatectomy patients and containing between 78 and 182 participants. Outcome measurements and statistical analysis: Biochemical failure was assessed using log-rank analysis and Cox regression analysis. Results and limitations: Application of Latent Process Decomposition identified a common process in all four independent datasets examined. Cancers assigned to this process (designated DESNT cancers) are characterized by low expression of a core set of 45 genes, many encoding proteins involved in the cytoskeleton machinery, ion transport, and cell adhesion. For the three datasets with linked prostate-specific antigen failure data following prostatectomy, patients with DESNT cancer exhibited poor outcome relative to other patients (p = 2.65 × 10−5, p = 4.28 × 10−5, and p = 2.98 × 10−8). When these three datasets were combined, the independent predictive value of DESNT membership was p = 1.61 × 10−7, compared with p = 1.00 × 10−5 for Gleason sum.
A limitation of the study is that only prediction of prostate-specific antigen failure was examined. Conclusions: Our results demonstrate the existence of a novel poor-prognosis category of human prostate cancer and will assist in the targeting of therapy, helping avoid treatment-associated morbidity in men with indolent disease. Patient Summary: Prostate cancer, unlike breast cancer, does not have a robust classification framework. We propose that this failure has occurred because prostate cancer samples selected for analysis frequently have heterogeneous compositions (individual samples are made up of many different parts that each have different characteristics). Applying a mathematical approach that can overcome this problem, we identify a novel poor-prognosis category of human prostate cancer called DESNT.
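The biochemical-failure comparison reported here rests on the log-rank test. A minimal two-group version, written from the textbook definition (a sketch, not the authors' analysis code), looks like this:

```python
from itertools import groupby

def logrank(times_a, events_a, times_b, events_b):
    """Two-sample log-rank chi-square statistic (1 degree of freedom).
    times: follow-up times; events: 1 = event (e.g. PSA failure), 0 = censored."""
    data = sorted([(t, e, 0) for t, e in zip(times_a, events_a)] +
                  [(t, e, 1) for t, e in zip(times_b, events_b)])
    n_a, n_b = len(times_a), len(times_b)
    o_minus_e = 0.0  # observed minus expected events in group A
    var = 0.0        # hypergeometric variance, summed over event times
    for t, grp in groupby(data, key=lambda r: r[0]):
        rows = list(grp)
        d = sum(e for _, e, _ in rows)             # events at time t
        d_a = sum(e for _, e, g in rows if g == 0) # events in group A
        n = n_a + n_b                              # at risk just before t
        if d > 0 and n > 1:
            o_minus_e += d_a - d * n_a / n
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
        # everyone with time t leaves the risk set (event or censored)
        n_a -= sum(1 for _, _, g in rows if g == 0)
        n_b -= sum(1 for _, _, g in rows if g == 1)
    return o_minus_e ** 2 / var if var > 0 else 0.0
```

Two identical groups give a statistic of zero; the larger the statistic, the stronger the evidence that the failure curves differ (compared against a chi-square distribution with 1 df).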
This work was funded by the Bob Champion Cancer Trust, The Masonic Charitable Foundation successor to The Grand Charity, The King Family, and The University of East Anglia. We acknowledge support from Movember, from Prostate Cancer UK, Callum Barton, and from The Andy Ripley Memorial Fund. We would like to acknowledge the support of the National Institute for Health Research which funds the Cambridge Biomedical Research Centre, Cambridge UK.
Mon, 06 Mar 2017 00:00:00 GMT http://hdl.handle.net/10023/11922 2017-03-06T00:00:00Z
Luca, Bogdan-Alexandru; Brewer, Daniel S.; Edwards, Dylan R.; Edwards, Sandra; Whitaker, Hayley C.; Merson, Sue; Dennis, Nening; Cooper, Rosalin A.; Hazell, Steven; Warren, Anne Y.; Eeles, Rosalind; Lynch, Andy G.; Ross-Adams, Helen; Lamb, Alastair D.; Neal, David E.; Sethia, Krishna; Mills, Robert D.; Ball, Richard Y.; Curley, Helen; Clark, Jeremy; Moulton, Vincent; Cooper, Colin S.
Correlation estimation using components of Japanese candlesticks
http://hdl.handle.net/10023/11901
Using the wicks' difference from the classical Japanese candlestick representation of daily open, high, low and close prices brings efficiency when estimating the correlation in a bivariate Brownian motion. Interpreting the correlation estimator of Rogers and Zhou (2008) in the light of the wicks' difference allows us to suggest modifications that lead to increased efficiency and robustness with respect to the baseline model. An empirical study on four major financial markets confirms the advantages of the modified estimator.
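For intuition, the candlestick components such estimators draw on can be computed directly from daily OHLC data, with a naive Pearson correlation of price changes as the baseline against which wick-based estimators are compared. The functions below are illustrative only and do not reproduce the Rogers and Zhou (2008) estimator:

```python
import math

def candle_components(o, h, l, c):
    """Decompose one daily candle: signed body (close - open) plus
    non-negative upper and lower wick lengths measured from the body's ends."""
    return {"body": c - o,
            "upper_wick": h - max(o, c),
            "lower_wick": min(o, c) - l}

def pearson(xs, ys):
    """Baseline close-to-close correlation estimate for comparison."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

The efficiency gain described in the abstract comes from the wicks carrying intraday extreme-price information that close-to-close returns discard.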
Fri, 01 Jan 2016 00:00:00 GMT http://hdl.handle.net/10023/11901 2016-01-01T00:00:00Z
Popov, Valentin Mina
Appraising the relevance of DNA copy number loss and gain in prostate cancer using whole genome DNA sequence data
http://hdl.handle.net/10023/11812
A variety of models have been proposed to explain regions of recurrent somatic copy number alteration (SCNA) in human cancer. Our study employs whole genome DNA sequence (WGS) data from tumor samples (n = 103) to comprehensively assess the role of the Knudson two-hit genetic model in SCNA generation in prostate cancer. In total, 64 recurrent regions of loss and gain were detected, of which 28 were novel, including regions of loss with more than 15% frequency at Chr4p15.2-p15.1 (15.53%), Chr6q27 (16.50%) and Chr18q12.3 (17.48%). Comprehensive mutation screens of genes, lincRNA encoding sequences, control regions and conserved domains within SCNAs demonstrated that a two-hit genetic model was supported in only a minor proportion of recurrent SCNA losses examined (15/40). We found that recurrent breakpoints and regions of inversion often occur within Knudson model SCNAs, leading to the identification of ZNF292 as a target gene for the deletion at 6q14.3-q15 and NKX3.1 as a two-hit target at 8p21.3-p21.2. The importance of alterations of lincRNA sequences was illustrated by the identification of a novel mutational hotspot at the KCCAT42, FENDRR, CAT1886 and STCAT2 loci at the 16q23.1-q24.3 loss. Our data confirm that the burden of SCNAs is predictive of biochemical recurrence, define nine individual regions that are associated with relapse, and highlight the possible importance of ion channel and G-protein coupled-receptor (GPCR) pathways in cancer development. We conclude that a two-hit genetic model accounts for about one third of SCNA losses, indicating that other mechanisms, such as haploinsufficiency and epigenetic inactivation, account for the remainder.
We acknowledge support from Cancer Research UK (C5047/A22530, C309/A11566, C368/A6743, A368/A7990, C14303/A17197) and the Dallaglio Foundation. We also acknowledge support from the National Institute of Health Research (NIHR) (The Biomedical Research Centre at The Institute of Cancer Research & The Royal Marsden NHS Foundation Trust and the project "Prostate Cancer: Mechanisms of Progression and Treatment (PROMPT)" [G0500966/75466]). We thank the Wellcome Trust, Bob Champion Cancer Trust, The Orchid Cancer appeal, The RoseTrees Trust, The North West Cancer Research Fund, Big C, The King family, and The Masonic Charitable Foundation for funding. This research is supported by the Francis Crick Institute which receives its core funding from Cancer Research UK (FC001202), the UK Medical Research Council (FC001202), and the Wellcome Trust (FC001202).
Mon, 25 Sep 2017 00:00:00 GMT http://hdl.handle.net/10023/11812 2017-09-25T00:00:00Z
Camacho, Niedzica; Van Loo, Peter; Edwards, Sandra; Kay, Jonathan D.; Matthews, Lucy; Haase, Kerstin; Clark, Jeremy; Dennis, Nening; Thomas, Sarah; Kremeyer, Barbara; Zamora, Jorge; Butler, Adam P.; Gundem, Gunes; Merson, Sue; Luxton, Hayley; Hawkins, Steve; Ghori, Mohammed; Marsden, Luke; Lambert, Adam; Karaszi, Katalin; Pelvender, Gill; Massie, Charlie E.; Kote-Jarai, Zsofia; Raine, Keiran; Jones, David; Howat, William J.; Hazell, Steven; Livni, Naomi; Fisher, Cyril; Ogden, Christopher; Kumar, Pardeep; Thompson, Alan; Nicol, David; Mayer, Erik; Dudderidge, Tim; Yu, Yongwei; Zhang, Hongwei; Shah, Nimish C.; Gnanapragasam, Vincent J.; The CRUK-ICGC Prostate Group; Isaacs, William; Visakorpi, Tapio; Hamdy, Freddie; Berney, Dan; Verrill, Clare; Warren, Anne Y.; Wedge, David C.; Lynch, Andrew G.; Foster, Christopher S.; Lu, Yong Jie; Bova, G. Steven; Whitaker, Hayley C.; McDermott, Ultan; Neal, David E.; Eeles, Rosalind; Cooper, Colin S.; Brewer, Daniel S.
Title redacted
http://hdl.handle.net/10023/11739
Thu, 07 Dec 2017 00:00:00 GMT http://hdl.handle.net/10023/11739 2017-12-07T00:00:00Z
Sharifi Far, Serveh
Spatio-temporal variation in click production rates of beaked whales : implications for passive acoustic density estimation
http://hdl.handle.net/10023/11712
Passive acoustic monitoring has become an increasingly prevalent tool for estimating density of marine mammals, such as beaked whales, which vocalize often but are difficult to survey visually. Counts of acoustic cues (e.g., vocalizations), when corrected for detection probability, can be translated into animal density estimates by applying an individual cue production rate multiplier. It is essential to understand variation in these rates to avoid biased estimates. The most direct way to measure cue production rate is with animal-mounted acoustic recorders. This study utilized data from sound recording tags deployed on Blainville's (Mesoplodon densirostris, 19 deployments) and Cuvier's (Ziphius cavirostris, 16 deployments) beaked whales, in two locations per species, to explore spatial and temporal variation in click production rates. No spatial or temporal variation was detected within the average click production rate of Blainville's beaked whales when calculated over dive cycles (including silent periods between dives); however, spatial variation was detected when averaged only over vocal periods. Cuvier's beaked whales exhibited significant spatial and temporal variation in click production rates within vocal periods and when silent periods were included. This evidence of variation emphasizes the need to utilize appropriate cue production rates when estimating density from passive acoustic data.
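The dependence that the conclusion warns about is visible in the standard cue-counting density estimator, D = n(1 − c) / (a · p · T · r): estimated density scales inversely with the assumed cue (click) production rate r, so applying a rate from the wrong location or season biases D directly. A sketch with illustrative parameter names (not the authors' code):

```python
def cue_density(n_cues, false_pos_rate, area_km2, p_detect, hours, cue_rate_per_hr):
    """Animal density (per km^2) from a passive acoustic cue count:
    D = n(1 - c) / (a * p * T * r), where
    n = cues detected, c = false-positive proportion, a = monitored area,
    p = cue detection probability, T = monitoring time, r = cue rate."""
    return (n_cues * (1.0 - false_pos_rate)
            / (area_km2 * p_detect * hours * cue_rate_per_hr))
```

Because r sits in the denominator, overestimating the click rate by 20% underestimates density by about 17%, which is why spatially or temporally mismatched cue rates matter.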
T.A.M. was funded under Grant No. N000141010382 from the Office of Naval Research (LATTE project) and thanks support by CEAUL (funded by FCT - Fundação para a Ciência e a Tecnologia, Portugal, through the project UID/MAT/00006/2013). M.P.J. was funded by a Marie Curie Career Integration Grant and M.P.J. and P.L.T. were funded by MASTS (The Marine Alliance for Science and Technology for Scotland, a research pooling initiative funded by the Scottish Funding Council under grant HR09011 and contributing institutions). L.S.H. thanks the BRS Bahamas team that helped collect the Bahamas data, and A. Bocconcelli. D.H. and L.T. were funded by the Office of Naval Research (Award No. N00014-14-1-0394). N.A.S. was funded by an EU-Horizon 2020 Marie Slodowska Curie fellowship (project ECOSOUND). DTAG data in the Canary Islands were collected with funds from the U.S. Office of Naval Research and Fundación Biodiversidad (EU project LIFE INDEMARES) with permit from the Canary Islands and Spanish governments.
Wed, 01 Mar 2017 00:00:00 GMT http://hdl.handle.net/10023/11712 2017-03-01T00:00:00Z
Warren, Victoria E.; Marques, Tiago A.; Harris, Danielle; Thomas, Len; Tyack, Peter L.; Aguilar de Soto, Natacha; Hickmott, Leigh S.; Johnson, Mark P.
Synthetic lethality between androgen receptor signalling and the PARP pathway in prostate cancer
http://hdl.handle.net/10023/11680
Emerging data demonstrate homologous recombination (HR) defects in castration-resistant prostate cancers, rendering these tumours sensitive to PARP inhibition. Here we demonstrate a direct requirement for the androgen receptor (AR) to maintain HR gene expression and HR activity in prostate cancer. We show that PARP-mediated repair pathways are upregulated in prostate cancer following androgen-deprivation therapy (ADT). Furthermore, upregulation of PARP activity is essential for the survival of prostate cancer cells, and we demonstrate a synthetic lethality between ADT and PARP inhibition in vivo. Our data suggest that ADT can functionally impair HR prior to the development of castration resistance and that this could potentially be exploited therapeutically using PARP inhibitors in combination with androgen-deprivation therapy upfront in advanced or high-risk prostate cancer.
Tumours with homologous recombination (HR) defects become sensitive to PARPi. Here, the authors show that the androgen receptor (AR) regulates HR and that AR inhibition activates the PARP pathway in vivo; thus inhibition of both AR and PARP is required for effective treatment of high-risk prostate cancer.
This study was supported by the National Cancer Research Institute (National Institute of Health Research (NIHR) Collaborative Study: ‘Prostate Cancer: Mechanisms of Progression and Treatment (PROMPT)” (grant G0500966/75466). This work was funded by a Cancer Research UK program grant (D.N.), the Swedish Research Council (T.H.), AFA insurance (T.H.), Swedish Cancer Society (T.H.), the Swedish Pain Relief Foundation (T.H.), the Torsten and Ragnar Söderberg Foundation (T.H.), AstraZeneca (T.H.) Centre for Clinical Research (CKF) (F.T.), the Västmanland Research Foundation for Cancer in Vasteras (F.T.), the Henning and Ida Persson Research Foundation (F.T.).
Tue, 29 Aug 2017 00:00:00 GMT http://hdl.handle.net/10023/11680 2017-08-29T00:00:00Z
Asim, Mohammad; Tarish, Firas; Zecchini, Heather I.; Sanjiv, Kumar; Gelali, Eleni; Massie, Charles E.; Baridi, Ajoeb; Warren, Anne Y.; Zhao, Wanfeng; Ogris, Christoph; McDuffus, Leigh-Anne; Mascalchi, Patrice; Shaw, Greg; Dev, Harveer; Wadhwa, Karan; Wijnhoven, Paul; Forment, Josep V.; Lyons, Scott R.; Lynch, Andy G.; O'Neill, Cormac; Zecchini, Vincent R.; Rennie, Paul S.; Baniahmad, Aria; Tavaré, Simon; Mills, Ian G.; Galanty, Yaron; Crosetto, Nicola; Schultz, Niklas; Neal, David; Helleday, Thomas
Inference from randomized (factorial) experiments
http://hdl.handle.net/10023/11606
This is a contribution to the discussion of the interesting paper by Ding [Statist. Sci. 32 (2017) 331–345], which contrasts approaches attributed to Neyman and Fisher. I believe that Fisher’s usual assumption was unit-treatment additivity, rather than the “sharp null hypothesis” attributed to him. Fisher also developed the notion of interaction in factorial experiments. His explanation leads directly to the concept of marginality, which is essential for the interpretation of data from any factorial experiment.
Sun, 01 Jan 2017 00:00:00 GMT http://hdl.handle.net/10023/11606 2017-01-01T00:00:00Z
Bailey, Rosemary Anne
Genomic evolution of breast cancer metastasis and relapse
http://hdl.handle.net/10023/11553
Patterns of genomic evolution between primary and metastatic breast cancer have not been studied in large numbers, despite patients with metastatic breast cancer having dismal survival. We sequenced whole genomes or a panel of 365 genes on 299 samples from 170 patients with locally relapsed or metastatic breast cancer. Several lines of analysis indicate that clones seeding metastasis or relapse disseminate late from primary tumors, but continue to acquire mutations, mostly accessing the same mutational processes active in the primary tumor. Most distant metastases acquired driver mutations not seen in the primary tumor, drawing from a wider repertoire of cancer genes than early drivers. These include a number of clinically actionable alterations and mutations inactivating SWI-SNF and JAK2-STAT3 pathways.
A.G.L. and J.H.R.F. were supported by a Cancer Research UK Program Grant to Simon Tavaré (C14303/A17197).
Mon, 14 Aug 2017 00:00:00 GMT http://hdl.handle.net/10023/11553 2017-08-14T00:00:00Z
Yates, Lucy R.; Knappskog, Stian; Wedge, David; Farmery, James H. R.; Gonzalez, Santiago; Martincorena, Inigo; Alexandrov, Ludmil B.; Van Loo, Peter; Haugland, Hans Kristian; Lilleng, Peer Kaare; Gundem, Gunes; Gerstung, Moritz; Pappaemmanuil, Elli; Gazinska, Patrycja; Bhosle, Shriram G.; Jones, David; Raine, Keiran; Mudie, Laura; Latimer, Calli; Sawyer, Elinor; Desmedt, Christine; Sotiriou, Christos; Stratton, Michael R.; Sieuwerts, Anieta M.; Lynch, Andy G.; Martens, John W.; Richardson, Andrea L.; Tutt, Andrew; Lønning, Per Eystein; Campbell, Peter J.
Measuring temporal trends in biodiversity
http://hdl.handle.net/10023/11534
In 2002, nearly 200 nations signed up to the 2010 target of the Convention for Biological Diversity, ‘to significantly reduce the rate of biodiversity loss by 2010’. In order to assess whether the target was met, it became necessary to quantify temporal trends in measures of diversity. This resulted in a marked shift in focus for biodiversity measurement. We explore the developments in measuring biodiversity that were prompted by the 2010 target. We consider measures based on species proportions, and also explain why a geometric mean of relative abundance estimates was preferred to such measures for assessing progress towards the target. We look at the use of diversity profiles, and consider how species similarity can be incorporated into diversity measures. We also discuss measures of turnover that can be used to quantify shifts in community composition arising for example from climate change.
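As a sketch of the preferred indicator: the geometric mean of species' relative abundances, scaled to 1 in a base year, responds to a proportional decline in any species regardless of that species' share of total abundance, which is one reason it was preferred over measures based on species proportions. Illustrative code (assuming complete abundance lists per year; not the authors' implementation):

```python
import math

def geometric_mean_index(abundance_by_year, base_year):
    """Geometric mean of species' abundances relative to a base year,
    so the index equals 1.0 in the base year.
    abundance_by_year: {year: [abundance per species, same order each year]}."""
    base = abundance_by_year[base_year]
    index = {}
    for year, abund in abundance_by_year.items():
        # average the log relative abundances, skipping unusable zeros
        logs = [math.log(a / b) for a, b in zip(abund, base) if a > 0 and b > 0]
        index[year] = math.exp(sum(logs) / len(logs))
    return index
```

If every species doubles relative to the base year the index is exactly 2, and a halving of one species cancels a doubling of another, properties an arithmetic mean of proportions does not share.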
Yuan was part-funded by EPSRC/NERC Grant EP/1000917/1 and Marcon by ANR-10-LABX-25-01.
Sun, 01 Oct 2017 00:00:00 GMT http://hdl.handle.net/10023/11534 2017-10-01T00:00:00Z
Buckland, S. T.; Yuan, Y.; Marcon, Eric
Authentication and characterisation of a new oesophageal adenocarcinoma cell line : MFD-1
http://hdl.handle.net/10023/11487
New biological tools are required to understand the functional significance of genetic events revealed by whole genome sequencing (WGS) studies in oesophageal adenocarcinoma (OAC). The MFD-1 cell line was isolated, without recombinant-DNA transformation, from a 55-year-old male with OAC. Somatic genetic variations from MFD-1, tumour, normal oesophagus, and leucocytes were analysed with SNP6 arrays. WGS was performed on tumour and leucocytes. RNA-seq was performed on MFD-1 and two classic OAC cell lines, FLO-1 and OE33. Transposase-accessible chromatin sequencing (ATAC-seq) was performed on MFD-1, OE33, and non-neoplastic HET1A cells. Functional studies were also performed. MFD-1 had high SNP genotype concordance with the matched germline/tumour. The parental tumour and MFD-1 carried four somatically acquired mutations in three recurrently mutated genes in OAC (TP53, ABCB1 and SEMA5A) not present in FLO-1 or OE33. MFD-1 displayed high expression of epithelial and glandular markers and a unique fingerprint of open chromatin. MFD-1 was tumorigenic in SCID mice and proliferative and invasive in 3D cultures. The clinical utility of whole genome sequencing projects will be delivered using accurate model systems to develop molecular-phenotype therapeutics. We have described the first such system to arise from the oesophageal International Cancer Genome Consortium project.
Wed, 07 Sep 2016 00:00:00 GMT http://hdl.handle.net/10023/11487 2016-09-07T00:00:00Z
Garcia, Edwin; Hayden, Annette; Birts, Charles; Britton, Edward; Cowie, Andrew; Pickard, Karen; Mellone, Massimiliano; Choh, Clarisa; Derouet, Mathieu; Duriez, Patrick; Noble, Fergus; White, Michael J.; Primrose, John N.; Strefford, Jonathan C.; Rose-Zerilli, Matthew; Thomas, Gareth J.; Ang, Yeng; Sharrocks, Andrew D.; Fitzgerald, Rebecca C.; Underwood, Timothy J.; Lynch, Andy G.
Mutational signatures of ionizing radiation in second malignancies
http://hdl.handle.net/10023/11484
Ionizing radiation is a potent carcinogen, inducing cancer through DNA damage. The signatures of mutations arising in human tissues following in vivo exposure to ionizing radiation have not been documented. Here, we searched for signatures of ionizing radiation in 12 radiation-associated second malignancies of different tumour types. Two signatures of somatic mutation characterize ionizing radiation exposure irrespective of tumour type. Compared with 319 radiation-naive tumours, radiation-associated tumours carry a median of 201 extra deletions genome-wide, 1-100 base pairs in size, often with microhomology at the junction. Unlike the deletions of radiation-naive tumours, these show no variation in density across the genome and no correlation with sequence context, replication timing or chromatin structure. Furthermore, we observe a significant increase in balanced inversions in radiation-associated tumours. Both small deletions and inversions generate driver mutations. Thus, ionizing radiation generates distinctive mutational signatures that explain its carcinogenic potential.
Sequencing data have been deposited at the European Genome-Phenome Archive (EGA, http://www.ebi.ac.uk/ega/), which is hosted by the European Bioinformatics Institute; accession numbers EGAS00001000138; EGAS00001000147; EGAS00001000195.
Mon, 12 Sep 2016 00:00:00 GMT
Behjati, Sam; Gundem, Gunes; Wedge, David C.; Roberts, Nicola D.; Tarpey, Patrick S.; Cooke, Susanna L.; Van Loo, Peter; Alexandrov, Ludmil B.; Ramakrishna, Manasa; Davies, Helen; Nik-Zainal, Serena; Hardy, Claire; Latimer, Calli; Raine, Keiran M.; Stebbings, Lucy; Menzies, Andy; Jones, David; Shepherd, Rebecca; Butler, Adam P.; Teague, Jon W.; Jorgensen, Mette; Khatri, Bhavisha; Pillay, Nischalan; Shlien, Adam; Futreal, P. Andrew; Badie, Christophe; McDermott, Ultan; Bova, G. Steven; Richardson, Andrea L.; Flanagan, Adrienne M.; Stratton, Michael R.; Campbell, Peter J.; Lynch, Andrew G.
Whole-genome sequencing of nine esophageal adenocarcinoma cell lines
http://hdl.handle.net/10023/11480
Esophageal adenocarcinoma (EAC) is highly mutated and molecularly heterogeneous. The number of cell lines available for study is limited and their genomes have been only partially characterized. The availability of an accurate annotation of their mutational landscape is crucial for accurate experimental design and correct interpretation of genotype-phenotype findings. We performed high-coverage, paired-end whole genome sequencing on eight EAC cell lines (ESO26, ESO51, FLO-1, JH-EsoAd1, OACM5.1 C, OACP4 C, OE33, SK-GT-4), all verified against original patient material, and one esophageal high-grade dysplasia cell line, CP-D. We have made available the aligned sequence data and report single nucleotide variants (SNVs), small insertions and deletions (indels), and copy number alterations, identified by comparison with the human reference genome and known single nucleotide polymorphisms (SNPs). We compare these putative mutations to mutations found in primary tissue EAC samples, to inform the use of these cell lines as a model of EAC.
This work was funded by an MRC Programme Grant to R.C.F. and a Cancer Research UK grant to PAWE. The pipeline for mutation calling is funded by Cancer Research UK as part of the International Cancer Genome Consortium. G.C. is a National Institute for Health Research Lecturer as part of a NIHR professorship grant to R.C.F. AGL is supported by a Cancer Research UK programme grant (C14303/A20406) to Simon Tavaré and the European Commission through the Horizon 2020 project SOUND (Grant Agreement no. 633974).
Fri, 10 Jun 2016 00:00:00 GMT
Contino, Gianmarco; Eldridge, Matthew D.; Secrier, Maria; Bower, Lawrence; Elliott, Rachael Fels; Weaver, Jamie; Lynch, Andy G.; Edwards, Paul A.W.; Fitzgerald, Rebecca C.
Decomposition of mutational context signatures using quadratic programming methods
http://hdl.handle.net/10023/11479
Methods for inferring signatures of mutational contexts from large cancer sequencing data sets are invaluable for biological research, but impractical for clinical application where we require tools that decompose the context data for an individual into signatures. One such method has recently been published using an iterative linear modelling approach. A natural alternative places the problem within a quadratic programming framework and is presented here, where it is seen to offer advantages of speed and accuracy.
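The core idea of the decomposition can be sketched as follows: given a mutational-context catalogue for one tumour and a matrix of known signatures, find non-negative mixture weights that best reconstruct the catalogue. This is a minimal non-negative least-squares illustration with invented numbers, solved by projected gradient descent; the function name, toy signatures and solver are all assumptions of this sketch, not the paper's algorithm or software.

```python
def decompose(catalogue, signatures, iters=5000, lr=1.0):
    """Decompose a mutation-context catalogue into known signatures by
    solving   min ||S w - c||^2  subject to  w >= 0
    with projected gradient descent, then rescaling w to sum to 1.
    `signatures` is a list of signature columns (one list of context
    weights per signature); `catalogue` is the observed context vector."""
    k, n = len(signatures), len(catalogue)
    w = [1.0 / k] * k
    for _ in range(iters):
        # residual r = S w - c
        r = [sum(signatures[j][i] * w[j] for j in range(k)) - catalogue[i]
             for i in range(n)]
        # gradient step on each weight, then project onto w >= 0
        for j in range(k):
            g = sum(signatures[j][i] * r[i] for i in range(n))
            w[j] = max(0.0, w[j] - lr * g)
    total = sum(w)
    return [x / total for x in w]

# Toy example: two "signatures" over four contexts, mixed 70/30
# (all numbers invented for illustration)
sig_a = [0.70, 0.20, 0.05, 0.05]
sig_b = [0.10, 0.10, 0.40, 0.40]
catalogue = [0.7 * a + 0.3 * b for a, b in zip(sig_a, sig_b)]
print([round(x, 2) for x in decompose(catalogue, [sig_a, sig_b])])
# ≈ [0.7, 0.3]
```

A quadratic-programming solver reaches the same optimum directly; the iterative sketch above merely shows what objective is being optimised.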
AGL was supported in this work by a Cancer Research UK programme grant [C14303/A20406] to Simon Tavaré. AGL acknowledges the support of the University of Cambridge, Cancer Research UK and Hutchison Whampoa Limited. Whole-genome sequencing of oesophageal adenocarcinoma was part of the oesophageal International Cancer Genome Consortium (ICGC) project. The oesophageal ICGC project was funded through a programme and infrastructure grant to Rebecca Fitzgerald as part of the OCCAMS collaboration.
Tue, 07 Jun 2016 00:00:00 GMT
Lynch, Andy G.
A tumor DNA complex aberration index is an independent predictor of survival in breast and ovarian cancer
http://hdl.handle.net/10023/11478
Complex focal chromosomal rearrangements in cancer genomes, also called "firestorms", can be scored from DNA copy number data. The complex arm-wise aberration index (CAAI) is a score that captures DNA copy number alterations that appear as focal complex events in tumors, and has potential prognostic value in breast cancer. This study aimed to validate this DNA-based prognostic index in breast cancer and test for the first time its potential prognostic value in ovarian cancer. Copy number alteration (CNA) data from 1950 breast carcinomas (METABRIC cohort) and 508 high-grade serous ovarian carcinomas (TCGA dataset) were analyzed. Cases were classified as CAAI positive if at least one complex focal event was scored. Complex alterations were frequently localized on chromosomes 8p (n = 159), 17q (n = 176) and 11q (n = 251). CAAI events on 11q were most frequent in estrogen receptor positive (ER+) cases and on 17q in estrogen receptor negative (ER-) cases. We found only a modest correlation between CAAI and the overall rate of genomic instability (GII) and number of breakpoints (r = 0.27 and r = 0.42, p < 0.001). Breast cancer specific survival (BCSS), overall survival (OS) and ovarian cancer progression-free survival (PFS) were used as clinical end points in Cox proportional hazards model survival analyses. CAAI positive breast cancers (43%) had higher mortality: hazard ratio (HR) of 1.94 (95%CI, 1.62-2.32) for BCSS, and of 1.49 (95%CI, 1.30-1.71) for OS. Representations of the 70-gene and the 21-gene predictors were compared with CAAI in multivariable models, and CAAI was independently significant with a Cox adjusted HR of 1.56 (95%CI, 1.23-1.99) for ER+ and 1.55 (95%CI, 1.11-2.18) for ER- disease. None of the expression-based predictors were prognostic in the ER- subset. 
We found that a model including CAAI and the two expression-based prognostic signatures outperformed a model including the 21-gene and 70-gene signatures but excluding CAAI. Inclusion of CAAI in the clinical prognostication tool PREDICT significantly improved its performance. CAAI positive ovarian cancers (52%) also had worse prognosis: HRs of 1.3 (95%CI, 1.1-1.7) for PFS and 1.3 (95%CI, 1.1-1.6) for OS. This study validates CAAI as an independent predictor of survival in both ER+ and ER- breast cancer and reveals a significant prognostic value for CAAI in high-grade serous ovarian cancer. (C) 2014 The Authors. Published by Elsevier B.V. on behalf of Federation of European Biochemical Societies. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
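For readers decoding the hazard ratios quoted above: they come from Cox proportional hazards models, whose textbook form (a standard definition, not a detail specific to this study) for a binary CAAI indicator x is

```latex
h(t \mid x) = h_0(t)\, e^{\beta x},
\qquad
\mathrm{HR} = \frac{h(t \mid x = 1)}{h(t \mid x = 0)} = e^{\beta},
```

so, for example, the BCSS hazard ratio of 1.94 says that CAAI-positive cases experience 1.94 times the instantaneous breast-cancer death rate of CAAI-negative cases at any given time, under the proportional-hazards assumption.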
Thu, 01 Jan 2015 00:00:00 GMT
Vollan, Hans Kristian Moen; Rueda, Oscar M.; Chin, Suet-Feung; Curtis, Christina; Turashvili, Gulisa; Shah, Sohrab; Lingjaerde, Ole Christian; Yuan, Yinyin; Ng, Charlotte K.; Dunning, Mark J.; Dicks, Ed; Provenzano, Elena; Sammut, Stephen; McKinney, Steven; Ellis, Ian O.; Pinder, Sarah; Purushotham, Arnie; Murphy, Leigh C.; Kristensen, Vessela N.; Brenton, James D.; Pharoah, Paul D. P.; Borresen-Dale, Anne-Lise; Aparicio, Samuel; Caldas, Carlos; Lynch, Andy
Phenotype specific analyses reveal distinct regulatory mechanism for chronically activated p53
http://hdl.handle.net/10023/11475
The downstream functions of the DNA binding tumor suppressor p53 vary depending on the cellular context, and persistent p53 activation has recently been implicated in tumor suppression and senescence. However, genome-wide information about p53-target gene regulation has been derived mostly from acute genotoxic conditions. Using ChIP-seq and expression data, we have found distinct p53 binding profiles between acutely activated (through DNA damage) and chronically activated (in senescent or pro-apoptotic conditions) p53. Compared to the classical ‘acute’ p53 binding profile, ‘chronic’ p53 peaks were closely associated with CpG-islands. Furthermore, the chronic CpG-island binding of p53 conferred distinct expression patterns between senescent and pro-apoptotic conditions. Using the p53 targets seen in the chronic conditions together with external high-throughput datasets, we have built p53 networks that revealed extensive self-regulatory ‘p53 hubs’ where p53 and many p53 targets can physically interact with each other. Integrating these results with public clinical datasets identified the cancer-associated lipogenic enzyme, SCD, which we found to be directly repressed by p53 through the CpG-island promoter, providing a mechanistic link between p53 and the ‘lipogenic phenotype’, a hallmark of cancer. Our data reveal distinct phenotype associations of chronic p53 targets that underlie specific gene regulatory mechanisms.
This work was supported by the University of Cambridge; Cancer Research UK (C14303/A17197); Hutchison Whampoa. In addition, Masashi N. and TO were supported by the Human Frontier Science Program (RGY0078/2010); HK was supported by MEXT KAKENHI (Grant Numbers 25116005 and 26291071); KT was supported by the Japan Society for the Promotion of Science (24–8563).
Thu, 19 Mar 2015 00:00:00 GMT
Kirschner, Kristina; Samarajiwa, Shamith A.; Cairns, Jonathan M.; Menon, Suraj; Pérez-Mancera, Pedro A.; Tomimatsu, Kosuke; Bermejo-Rodriguez, Camino; Ito, Yoko; Chandra, Tamir; Narita, Masako; Lyons, Scott K.; Lynch, Andy G.; Kimura, Hiroshi; Ohbayashi, Tetsuya; Tavaré, Simon; Narita, Masashi
Frequent somatic transfer of mitochondrial DNA into the nuclear genome of human cancer cells
http://hdl.handle.net/10023/11474
Mitochondrial genomes are separated from the nuclear genome for most of the cell cycle by the nuclear double membrane, intervening cytoplasm, and the mitochondrial double membrane. Despite these physical barriers, we show that somatically acquired mitochondrial-nuclear genome fusion sequences are present in cancer cells. Most occur in conjunction with intranuclear genomic rearrangements, and the features of the fusion fragments indicate that nonhomologous end joining and/or replication-dependent DNA double-strand break repair are the dominant mechanisms involved. Remarkably, mitochondrial-nuclear genome fusions occur at a similar rate per base pair of DNA as interchromosomal nuclear rearrangements, indicating the presence of a high frequency of contact between mitochondrial and nuclear DNA in some somatic cells. Transmission of mitochondrial DNA to the nuclear genome occurs in neoplastically transformed cells, but we do not exclude the possibility that some mitochondrial-nuclear DNA fusions observed in cancer occurred years earlier in normal somatic cells.
Mon, 01 Jun 2015 00:00:00 GMT
Ju, Young Seok; Tubio, Jose M.C.; Mifsud, William; Fu, Beiyuan; Davies, Helen R.; Ramakrishna, Manasa; Li, Yilong; Yates, Lucy; Gundem, Gunes; Tarpey, Patrick S.; Behjati, Sam; Papaemmanuil, Elli; Martin, Sancha; Fullam, Anthony; Gerstung, Moritz; Nangalia, Jyoti; Green, Anthony R.; Caldas, Carlos; Borg, Åke; Tutt, Andrew; Michael Lee, Ming Ta; Van't Veer, Laura J.; Tan, Benita K.T.; Aparicio, Samuel; Span, Paul N.; Martens, John W.M.; Knappskog, Stian; Vincent-Salomon, Anne; Børresen-Dale, Anne Lise; Eyfjörd, Jórunn Erla; Flanagan, Adrienne M.; Foster, Christopher; Neal, David E.; Cooper, Colin; Eeles, Rosalind; Lakhani, Sunil R.; Desmedt, Christine; Thomas, Gilles; Richardson, Andrea L.; Purdie, Colin A.; Thompson, Alastair M.; McDermott, Ultan; Yang, Fengtang; Nik-Zainal, Serena; Campbell, Peter J.; Stratton, Michael R.; Lynch, Andy
Mobile element insertions are frequent in oesophageal adenocarcinomas and can mislead paired-end sequencing analysis
http://hdl.handle.net/10023/11473
Background: Mobile elements are active in the human genome, both in the germline and in cancers, where they can mutate driver genes. Results: While analysing whole genome paired-end sequencing of oesophageal adenocarcinomas to find genomic rearrangements, we identified three ways in which new mobile element insertions appear in the data, resembling translocation or insertion junctions: inserts where unique sequence has been transduced by an L1 (Long interspersed element 1) mobile element; novel inserts that are confidently, but often incorrectly, mapped by alignment software to L1s or polyA tracts in the reference sequence; and a combination of these two, where different sequences within one insert are mapped to different loci. We identified nine unique sequences that were transduced by neighbouring L1s, both L1s in the reference genome and L1s not present in the reference. Many of the resulting inserts were small fragments that include little or no recognisable mobile element sequence. We found six loci in the reference genome to which sequence reads from inserts were frequently, and probably erroneously, mapped by alignment software: these were either L1 sequence or particularly long polyA runs. Inserts identified from such apparent rearrangement junctions averaged 16 per tumour (range 0-153) across 43 tumours. However, many inserts would not be detected by mapping the sequences to the reference genome, because they do not include sufficient mappable sequence. To estimate the total number of somatic inserts we searched for polyA sequences that were not present in the matched normal or other normals from the same tumour batch, and were not associated with known polymorphisms. Samples of these candidate inserts were verified by sequencing across them or manual inspection of surrounding reads: at least 85% were somatic and resembled L1-mediated events, most including L1Hs sequence. 
Approximately 100 such inserts were detected per tumour on average (range zero to approximately 700). Conclusions: Somatic mobile element insertions are abundant in these tumours, with novel inserts detected in over 75% of cases. The inserts create a variety of problems for the interpretation of paired-end sequencing data.
Funding was primarily from Cancer Research UK program grants to RCF and ST (C14478/A15874 and C14303/A17197), with additional support awarded to RCF from UK Medical Research Council, NHS National Institute for Health Research (NIHR), the Experimental Cancer Medicine Centre Network and the NIHR Cambridge Biomedical Research Centre, and Cancer Research UK Project grant C1023/A14545 to PAWE. JMJW was funded by a Wellcome Trust Translational Medicine and Therapeutics grant.
Fri, 10 Jul 2015 00:00:00 GMT
Paterson, Anna L.; Weaver, Jamie M. J.; Eldridge, Matthew D.; Tavare, Simon; Fitzgerald, Rebecca C.; Edwards, Paul A. W.; Lynch, Andy
Mining human prostate cancer datasets : the “camcAPP” shiny app
http://hdl.handle.net/10023/11472
Funding: Core CRUK funding: MD, AGL, ADL. Academy of Medical Sciences Clinical Lecturer Starter Grant SGCL11 (principal funder of this work): ADL.
Wed, 01 Mar 2017 00:00:00 GMT
Dunning, Mark J.; Vowler, Sarah L.; Lalonde, Emilie; Ross-Adams, Helen; Boutros, Paul; Mills, Ian G.; Lynch, Andy G.; Lamb, Alastair D.
HES5 silencing is an early and recurrent change in prostate tumourigenesis
http://hdl.handle.net/10023/11471
Prostate cancer is the most common cancer in men, resulting in over 10,000 deaths per year in the UK. Sequencing and copy number analysis of primary tumours have revealed heterogeneity within tumours and an absence of recurrent founder mutations, consistent with non-genetic disease-initiating events. Using methylation profiling in a series of multifocal prostate tumours, we identify promoter methylation of the transcription factor HES5 as an early event in prostate tumourigenesis. We confirm that this epigenetic alteration occurs in 86-97% of cases in two independent prostate cancer cohorts (n=49 and n=39 tumour-normal pairs). Treatment of prostate cancer cells with the demethylating agent 5-aza-2′-deoxycytidine increased HES5 expression and downregulated its transcriptional target HES6, consistent with functional silencing of the HES5 gene in prostate cancer. Finally, we identify and test a transcriptional module involving the AR, ERG, HES1 and HES6, and propose a model for the impact of HES5 silencing on tumourigenesis as a starting point for future functional studies.
The ICGC Prostate UK Group is funded by Cancer Research UK Grant C5047/A14835, by the Dallaglio Foundation, and by The Wellcome Trust. The Human Research Tissue Bank is supported by the NIHR Cambridge Biomedical Research Centre.
Wed, 01 Apr 2015 00:00:00 GMT
Massie, Charles E.; Spiteri, Inmaculada; Ross-Adams, Helen; Luxton, Hayley; Kay, Jonathan; Whitaker, Hayley C.; Dunning, Mark J.; Lamb, Alastair D.; Ramos-Montoya, Antonio; Brewer, Daniel S.; Cooper, Colin S.; Eeles, Rosalind; Warren, Anne Y.; Tavaré, Simon; Neal, David E.; Lynch, Andy G.; UK Prostate ICGC Group
multiSNV : a probabilistic approach for improving detection of somatic point mutations from multiple related tumour samples
http://hdl.handle.net/10023/11447
Somatic variant analysis of a tumour sample and its matched normal has been widely used in cancer research to distinguish germline polymorphisms from somatic mutations. However, due to the extensive intratumour heterogeneity of cancer, sequencing data from a single tumour sample may greatly underestimate the overall mutational landscape. In recent studies, multiple spatially or temporally separated tumour samples from the same patient were sequenced to identify the regional distribution of somatic mutations and study intratumour heterogeneity. There are a number of tools to perform somatic variant calling from matched tumour-normal next-generation sequencing (NGS) data; however none of these allow joint analysis of multiple same-patient samples. We discuss the benefits and challenges of multisample somatic variant calling and present multiSNV, a software package for calling single nucleotide variants (SNVs) using NGS data from multiple same-patient samples. Instead of performing multiple pairwise analyses of a single tumour sample and a matched normal, multiSNV jointly considers all available samples under a Bayesian framework to increase sensitivity of calling shared SNVs. By leveraging information from all available samples, multiSNV is able to detect rare mutations with variant allele frequencies down to 3% from whole-exome sequencing experiments.
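The sensitivity gain from joint calling can be seen in a toy two-hypothesis Bayesian sketch: a weak variant signal that a pairwise caller would reject in any single sample becomes convincing when the same signal recurs in every sample from the patient. This is a deliberately simplified illustration; `joint_posterior`, the read counts, and all parameter values are invented for this sketch and are not multiSNV's actual model.

```python
import math

def binom_lik(alt, total, p):
    """Binomial likelihood of seeing `alt` variant reads out of `total`
    when the true variant-allele fraction is `p`."""
    return math.comb(total, alt) * p**alt * (1 - p)**(total - alt)

def joint_posterior(samples, vaf=0.03, err=0.005, prior=1e-4):
    """Posterior probability that a variant at allele fraction `vaf` is
    present in every tumour sample, versus sequencing error (rate `err`)
    in every sample. `samples` is a list of (alt_reads, total_reads)
    pairs; all parameter defaults are invented for illustration."""
    odds = prior / (1 - prior)
    for alt, total in samples:
        odds *= binom_lik(alt, total, vaf) / binom_lik(alt, total, err)
    return odds / (1 + odds)

# One region with a ~3% signal is unconvincing on its own...
print(joint_posterior([(3, 100)]))
# ...but the same weak signal recurring across three regions of the
# tumour is strong evidence for a shared low-frequency somatic variant.
print(joint_posterior([(3, 100), (4, 100), (3, 90)]))
```

The real method models many more possibilities (genotypes per sample, shared versus private variants, base qualities), but the mechanism of pooling evidence across samples is the same.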
Funding: Cancer Research UK grant C14303/A17197. Funding for open access charge: University of Cambridge.
Tue, 19 May 2015 00:00:00 GMT
Josephidou, Malvina; Lynch, Andy G.; Tavaré, Simon
5-hydroxymethylcytosine marks promoters in colon that resist DNA hypermethylation in cancer
http://hdl.handle.net/10023/11446
Background: The discovery of cytosine hydroxymethylation (5hmC) as a mechanism that potentially controls DNA methylation changes typical of neoplasia prompted us to investigate its behaviour in colon cancer. 5hmC is globally reduced in proliferating cells such as colon tumours and the gut crypt progenitors, from which tumours can arise. Results: Here, we show that colorectal tumours and cancer cells express Ten-Eleven-Translocation (TET) transcripts at levels similar to normal tissues. Genome-wide analyses show that promoters marked by 5hmC in normal tissue, and those identified as TET2 targets in colorectal cancer cells, are resistant to methylation gain in cancer. In vitro studies of TET2 in cancer cells confirm that these promoters are resistant to methylation gain independently of sustained TET2 expression. We also find that a considerable number of the methylation gain-resistant promoters marked by 5hmC in normal colon overlap with those that are marked with poised bivalent histone modifications in embryonic stem cells. Conclusions: Together, our results indicate that promoters that acquire 5hmC upon normal colon differentiation are innately resistant to neoplastic hypermethylation by mechanisms that do not require high levels of 5hmC in tumours. Our study highlights the potential of cytosine modifications as biomarkers of cancerous cell proliferation.
The authors would like to acknowledge the support of The University of Cambridge, Cancer Research UK (CRUK SEB-Institute Group Award A ref10182; CRUK Senior fellowship C10112/A11388 to AEKI) and Hutchison Whampoa Limited. The Human Research Tissue Bank is supported by the NIHR Cambridge Biomedical Research Centre. FF is a ULB Professor funded by grants from the F.N.R.S. and Télévie, the IUAP P7/03 programme, the ARC (AUWB-2010-2015 ULB-No 7), the WB Health program and the Fonds Gaston Ithier. Data access: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=jpwzvsowiyuamzs&acc=GSE47592
Wed, 01 Apr 2015 00:00:00 GMT
http://hdl.handle.net/10023/11446
Uribe-Lewis, Santiago; Stark, Rory; Carroll, Thomas; Dunning, Mark J.; Bachman, Martin; Ito, Yoko; Stojic, Lovorka; Halim, Silvia; Vowler, Sarah L.; Lynch, Andy G.; Delatte, Benjamin; de Bony, Eric J.; Colin, Laurence; Defrance, Matthieu; Krueger, Felix; Silva, Ana Luisa; ten Hoopen, Rogier; Ibrahim, Ashraf E.K.; Fuks, François; Murrell, Adele
Epigenetic and oncogenic regulation of SLC16A7 (MCT2) results in protein over-expression, impacting on signalling and cellular phenotypes in prostate cancer
http://hdl.handle.net/10023/11445
Monocarboxylate Transporter 2 (MCT2) is a major pyruvate transporter encoded by the SLC16A7 gene. Recent studies pointed to a consistent overexpression of MCT2 in prostate cancer (PCa), suggesting MCT2 as a putative biomarker and molecular target. Despite the importance of this observation, the mechanisms involved in MCT2 regulation are unknown. Through an integrative analysis, we have discovered that selective demethylation of an internal SLC16A7/MCT2 promoter is a recurrent event in independent PCa cohorts. This demethylation is associated with expression of isoforms differing only in 5'-UTR translational control motifs, providing one contributing mechanism for MCT2 protein overexpression in PCa. Genes co-expressed with SLC16A7/MCT2 also clustered in oncogenic-related pathways, and effectors of these signalling pathways were found to bind at the SLC16A7/MCT2 gene locus. Finally, MCT2 knock-down attenuated the growth of PCa cells. The present study unveils an unexpected epigenetic regulation of SLC16A7/MCT2 isoforms and identifies a link between SLC16A7/MCT2, Androgen Receptor (AR), ETS-related gene (ERG) and other oncogenic pathways in PCa. These results underscore the importance of combining data on epigenetic, transcriptomic and protein-level changes to allow more comprehensive insights into the mechanisms underlying protein expression, which in our case provide additional weight to MCT2 as a candidate biomarker and molecular target in PCa.
Felisbino S. received a fellowship from the Sao Paulo Research Foundation (FAPESP) ref. 2013/08830-2 and 2013/06802-1. Anne Y Warren research time funded by: Cambridge Biomedical Research Centre.
Tue, 02 Jun 2015 00:00:00 GMT
http://hdl.handle.net/10023/11445
Pértega-Gomes, Nelma; Vizcaino, Jose R.; Felisbino, Sergio; Warren, Anne Y.; Shaw, Greg; Kay, Jonathan; Whitaker, Hayley; Lynch, Andy G.; Fryer, Lee; Neal, David E.; Massie, Charles E.
Integration of copy number and transcriptomics provides risk stratification in prostate cancer: a discovery and validation cohort study
http://hdl.handle.net/10023/11443
Background: Understanding the heterogeneous genotypes and phenotypes of prostate cancer is fundamental to improving the way we treat this disease. As yet, there are no validated descriptions of prostate cancer subgroups derived from integrated genomics linked with clinical outcome. Methods: In a study of 482 tumour, benign and germline samples from 259 men with primary prostate cancer, we used integrative analysis of copy number alterations (CNA) and array transcriptomics to identify genomic loci that affect expression levels of mRNA in an expression quantitative trait loci (eQTL) approach, to stratify patients into subgroups that we then associated with future clinical behaviour, and compared with either CNA or transcriptomics alone. Findings: We identified five separate patient subgroups with distinct genomic alterations and expression profiles based on 100 discriminating genes in our separate discovery and validation sets of 125 and 103 men. These subgroups were able to consistently predict biochemical relapse (p = 0.0017 and p = 0.016, respectively) and were further validated in a third cohort with long-term follow-up (p = 0.027). We show the relative contributions of gene expression and copy number data on phenotype, and demonstrate the improved power gained from integrative analyses. We confirm alterations in six genes previously associated with prostate cancer (MAP3K7, MELK, RCBTB2, ELAC2, TPD52, ZBTB4), and also identify 94 genes not previously linked to prostate cancer progression that would not have been detected using either transcript or copy number data alone. We confirm a number of previously published molecular changes associated with high-risk disease, including MYC amplification, NKX3-1, RB1 and PTEN deletions, over-expression of PCA3 and AMACR, and loss of MSMB in tumour tissue. A subset of the 100 genes outperforms established clinical predictors of poor prognosis (PSA, Gleason score), as well as previously published gene signatures (p = 0.0001). We further show how our molecular profiles can be used for the early detection of aggressive cases in a clinical setting, and to inform treatment decisions. Interpretation: For the first time in prostate cancer, this study demonstrates the importance of integrated genomic analyses incorporating both benign and tumour tissue data in identifying molecular alterations that lead to robust gene sets predictive of clinical outcome in independent patient cohorts.
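The eQTL step described in the abstract can be caricatured as a per-gene regression of expression on copy number. The following is a minimal sketch under invented toy data (the cohort values are made up, and the study's actual pipeline is far more elaborate); PTEN is used only because the abstract names it as a recurrently deleted gene.

```python
import numpy as np

def eqtl_scan(cna, expr):
    """Per-gene linear fit of mRNA expression on copy number.
    Returns (slope, R^2) for each gene: a positive slope with high R^2
    suggests a cis copy-number-driven expression change."""
    results = {}
    for gene in expr:
        x = np.asarray(cna[gene], dtype=float)   # copy-number estimate per sample
        y = np.asarray(expr[gene], dtype=float)  # (log) expression per sample
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r2 = 1.0 - resid.var() / y.var()
        results[gene] = (slope, r2)
    return results

# Toy cohort: a deleted tumour suppressor whose expression tracks copy number.
cna = {"PTEN": [2, 2, 1, 1, 0, 2, 1, 0]}
expr = {"PTEN": [5.1, 4.9, 3.2, 3.0, 1.1, 5.0, 2.9, 0.9]}
hits = eqtl_scan(cna, expr)
```

Genes where expression does not track the locus's copy number would show a flat slope and low R² under the same fit, which is how integrating the two data types separates copy-number-driven from independently regulated expression changes.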
Study data are deposited in NCBI GEO (unique identifier number GSE70770).
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/10023/11443
Ross-Adams, H.; Lamb, A. D.; Dunning, M. J.; Halim, S.; Lindberg, J.; Massie, C. M.; Egevad, L. A.; Russell, R.; Ramos-Montoya, A.; Vowler, S. L.; Sharma, N. L.; Kay, J.; Whitaker, H.; Clark, J.; Hurst, R.; Gnanapragasam, V. J.; Shah, N. C.; Warren, A. Y.; Cooper, C. S.; Lynch, A. G.; Stark, R.; Mills, I. G.; Grönberg, H.; Neal, D. E.; Shaw, Greg; Hori, Satoshi; Baridi, Ajoeb; Tran, Maxine; Wadhwa, Karan; Nelson, Adam; Patel, Keval; Thomas, Benjamin; Luxton, Hayley; Gnanpragasam, Vincent; Doble, Andrew; Kastner, Christof; Aho, Tevita; Haynes, Beverley; Partridge, Wendy; Cromwell, Elizabeth; Sangrasi, Asif; Burge, Jo; George, Anne; Stearn, Sara; Corcoran, Marie; Coret, Hansley; Basnett, Gillian; Francis, Indu; Whitington, Thomas; Yuan, Yinyin; Rueda, Oscar; Hadfield, James; Howat, Will; Miller, Jodi; Brewer, Daniel; CamCaP Study Group
A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing
http://hdl.handle.net/10023/11442
As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here, using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ~100× shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy.
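Concordance between pipelines of the kind measured here is commonly summarised by the pairwise overlap of their call sets. A minimal sketch (the call sets below are invented; the consortium's own comparisons are against a curated benchmark set):

```python
def jaccard(calls_a, calls_b):
    """Jaccard index between two sets of mutation calls, each call
    identified by (chromosome, position, ref, alt)."""
    a, b = set(calls_a), set(calls_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Two hypothetical pipelines agree on two calls and each adds a private one.
pipeline_a = {("chr1", 100, "A", "T"), ("chr2", 200, "G", "C"), ("chr3", 300, "C", "G")}
pipeline_b = {("chr1", 100, "A", "T"), ("chr2", 200, "G", "C"), ("chr4", 400, "T", "A")}
concordance = jaccard(pipeline_a, pipeline_b)  # 2 shared out of 4 distinct calls
```

Low pairwise Jaccard values across many pipelines are exactly the "low concordance" pattern the abstract describes, and a shared benchmark set lets each pipeline's private calls be classified as genuine or artefactual.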
Sequence data for this study have been deposited in the European Genome-phenome Archive (EGA) under the accession number EGAS00001001539.
Wed, 09 Dec 2015 00:00:00 GMT
http://hdl.handle.net/10023/11442
Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jaeger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Groebner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdes-Mas, Rafael; Nakken, Sigve; Vodak, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Letourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; Lopez-Otin, Carlos; Campo, Elias; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.
The importance of DNA methylation in prostate cancer development
http://hdl.handle.net/10023/11435
After briefly reviewing the nature of DNA methylation, its general role in cancer and the tools available to interrogate it, we consider the literature surrounding DNA methylation as relating to prostate cancer. Specific consideration is given to recurrent alterations. A list of frequently reported genes is synthesized from 17 studies that have reported on methylation changes in malignant prostate tissue, and we chart the timing of those changes in the disease's history through amalgamation of several previously published data sets. We also review associations with genetic alterations and hormone signalling, before the practicalities of investigating prostate cancer methylation using cell lines are assessed. We conclude by outlining the interplay between DNA methylation and prostate cancer metabolism and their regulation by androgen receptor, with a specific discussion of the mitochondria and their associations with DNA methylation.
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/11435
Massie, Charles E.; Mills, Ian G.; Lynch, Andy G.
The early effects of rapid androgen deprivation on human prostate cancer
http://hdl.handle.net/10023/11434
The androgen receptor (AR) is the dominant growth factor in prostate cancer (PCa). Therefore, understanding how ARs regulate the human transcriptome is of paramount importance. The early effects of castration on human PCa have not previously been studied. Twenty-seven patients were medically castrated with degarelix 7 d before radical prostatectomy. We used mass spectrometry, immunohistochemistry, and gene expression array (validated by reverse transcription-polymerase chain reaction) to compare resected tumour with matched, controlled, untreated PCa tissue. All patients had castrate levels of serum androgen, with reduced levels of intraprostatic androgen at prostatectomy. We observed differential expression of known androgen-regulated genes (TMPRSS2, KLK3, CAMKK2, FKBP5). We identified 749 genes downregulated and 908 genes upregulated following castration. AR regulation of α-methylacyl-CoA racemase expression and three other genes (FAM129A, RAB27A, and KIAA0101) was confirmed. Upregulation of oestrogen receptor 1 (ESR1) expression was observed in malignant epithelia and was associated with differential expression of ESR1-regulated genes and correlated with proliferation (Ki-67 expression). Patient summary: This first-in-man study defines the rapid gene expression changes taking place in prostate cancer (PCa) following castration. Expression levels of the genes that the androgen receptor regulates are predictive of treatment outcome. Upregulation of oestrogen receptor 1 is a mechanism by which PCa cells may survive despite castration.
The authors thank CRUK; the NIHR; the Academy of Medical Sciences (RG:63397); the National Cancer Research Prostate Cancer: Mechanisms of Progression and Treatment (ProMPT) collaborative (G0500966/75466); Hutchison Whampoa Limited; the Human Research Tissue Bank (Addenbrooke’s Hospital, supported by the NIHR Cambridge BRC); and Cancer Research UK.
Mon, 01 Aug 2016 00:00:00 GMT
http://hdl.handle.net/10023/11434
Shaw, Greg L.; Whitaker, Hayley; Corcoran, Marie; Dunning, Mark J.; Luxton, Hayley; Kay, Jonathan; Massie, Charlie E.; Miller, Jodi L.; Lamb, Alastair D.; Ross-Adams, Helen; Russell, Roslin; Nelson, Adam W.; Eldridge, Matthew D.; Lynch, Andrew G.; Ramos-Montoya, Antonio; Mills, Ian G.; Taylor, Angela E.; Arlt, Wiebke; Shah, Nimish; Warren, Anne Y.; Neal, David E.
A comparative analysis of whole genome sequencing of esophageal adenocarcinoma pre- and post-chemotherapy
http://hdl.handle.net/10023/11433
The scientific community has avoided using tissue samples from patients who have been exposed to systemic chemotherapy to infer the genomic landscape of a given cancer. Esophageal adenocarcinoma is a heterogeneous, chemoresistant tumor for which the availability and size of pretreatment endoscopic samples are limiting. This study compares whole-genome sequencing data obtained from chemo-naive and chemo-treated samples. The quality of whole-genomic sequencing data is comparable across all samples regardless of chemotherapy status. Inclusion of samples collected post-chemotherapy increased the proportion of late-stage tumors. When comparing matched pre- and post-chemotherapy samples from 10 cases, the mutational signatures, copy number, and SNV mutational profiles reflect the expected heterogeneity in this disease. Analysis of SNVs in relation to allele-specific copy-number changes pinpoints the common ancestor to a point prior to chemotherapy. For cases in which pre- and post-chemotherapy samples do show substantial differences, the timing of the divergence is near-synchronous with endoreduplication. Comparison across a large prospective cohort (62 treatment-naive, 58 chemotherapy-treated samples) reveals no significant differences in the overall mutation rate, mutation signatures, specific recurrent point mutations, or copy-number events with respect to chemotherapy status. In conclusion, whole-genome sequencing of samples obtained following neoadjuvant chemotherapy is representative of the genomic landscape of esophageal adenocarcinoma. Excluding these samples reduces the material available for cataloging and introduces a bias toward the earlier stages of cancer.
The whole-genome sequencing data from this study have been submitted to the European Genome-phenome Archive (EGA; https://www.ebi.ac.uk/ega/home) under accession number EGAD00001002241. Mutation calls can be found within the ICGC data portal (https://dcc.icgc.org/) under project ID ESADUK and library IDs listed in Supplemental Table S2.
Thu, 01 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/11433
Noorani, Ayesha; Bornschein, Jan; Lynch, Andy G.; Secrier, Maria; Achilleos, Achilleas; Eldridge, Matthew; Bower, Lawrence; Weaver, Jamie M.J.; Crawte, Jason; Ong, Chin-Ann; Shannon, Nicholas; MacRae, Shona; Grehan, Nicola; Nutzinger, Barbara; O'Donovan, Maria; Hardwick, Richard; Tavaré, Simon; Fitzgerald, Rebecca C.; on behalf of the Oesophageal Cancer Clinical and Molecular Stratification (OCCAMS) Consortium
New model for estimating glomerular filtration rate in patients with cancer
http://hdl.handle.net/10023/11432
Purpose: The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods: We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using the root-mean-squared-error (RMSE) and median residual statistics on internal and external validation data sets. We compared carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results: Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved the eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)-adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) on the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17%, in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated. Conclusion: In a large data set from patients with cancer, BSA-adjusted CKD-EPI is the most accurate published model to predict GFR. The new model improves this estimation and may present a new standard of care.
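The clinical stake of GFR accuracy comes from the Calvert formula, dose (mg) = target AUC × (GFR + 25), the standard basis for carboplatin dosing. A sketch of the absolute-percentage-error criterion used in the abstract (the example GFR values and target AUC are invented for illustration):

```python
def carboplatin_dose(gfr, target_auc=5.0):
    """Calvert formula: carboplatin dose in mg for a target AUC
    (mg/mL*min), with GFR in mL/min."""
    return target_auc * (gfr + 25)

def dose_abs_pct_error(estimated_gfr, measured_gfr, target_auc=5.0):
    """Absolute percentage error of the dose implied by an estimated GFR,
    relative to the dose implied by the measured (51Cr-EDTA) GFR."""
    d_est = carboplatin_dose(estimated_gfr, target_auc)
    d_ref = carboplatin_dose(measured_gfr, target_auc)
    return abs(d_est - d_ref) / d_ref * 100.0

# An eGFR of 80 against a measured GFR of 100 mL/min stays inside the
# 20% dose-error tolerance; an eGFR of 60 falls outside it.
err_ok = dose_abs_pct_error(80, 100)
err_bad = dose_abs_pct_error(60, 100)
```

Because dose scales linearly with GFR + 25, the RMSE improvements reported for the new model translate directly into fewer patients crossing the 20% dose-error threshold.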
T.J. was supported by the Wellcome Trust Translational Medicine and Therapeutics Programme and the University of Cambridge, Department of Oncology (RJAG/076). H.E. was supported by the National Institute of Health Research Cambridge Biomedical Research Centre and the University of Cambridge.
Tue, 01 Aug 2017 00:00:00 GMT
http://hdl.handle.net/10023/11432
Janowitz, Tobias; Williams, Edward H.; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B.; Sammut, Stephen J.; Shepherd, Scott; White, Jeff; Mark, Patrick B.; Lynch, Andy G.; Jodrell, Duncan I.; Tavaré, Simon; Earl, Helena
An analysis of pilot whale vocalization activity using hidden Markov models
http://hdl.handle.net/10023/11194
Vocalizations of cetaceans form a key component of their social interactions. Such vocalization activity is driven by the behavioral states of the whales, which are not directly observable, so that latent-state models are natural candidates for modeling empirical data on vocalizations. In this paper, we use hidden Markov models to analyze calling activity of long-finned pilot whales (Globicephala melas) recorded over three years in the Vestfjord basin off Lofoten, Norway. Baseline models are used to motivate the use of three states, while more complex models are fit to study the influence of covariates on the state-switching dynamics. Our analysis demonstrates the potential usefulness of hidden Markov models in concisely yet accurately describing the stochastic patterns found in animal communication data, thereby providing a framework for drawing meaningful biological inference.
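The likelihood machinery behind such latent-state models is the forward algorithm. A minimal, self-contained sketch for a Poisson hidden Markov model of per-interval call counts (the state count and all parameter values below are illustrative, not the fitted model from the paper):

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def forward_loglik(counts, pi, trans, rates):
    """Log-likelihood of observed call counts under an N-state Poisson HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    n = len(pi)
    # initialise with the stationary/initial distribution times emission probabilities
    alpha = [pi[i] * poisson_pmf(counts[0], rates[i]) for i in range(n)]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for k in counts[1:]:
        # propagate through the transition matrix, then weight by emissions
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * poisson_pmf(k, rates[j])
                 for j in range(n)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik
```

Maximising this log-likelihood over the transition matrix and state-dependent calling rates (e.g. low/medium/high activity) is how the three-state models in the paper would be fit.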
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/11194
Popov, Valentin Mina; Langrock, Roland; De Ruiter, Stacy Lynn; Visser, Fleur

Automatic generation of generalised regular factorial designs
http://hdl.handle.net/10023/11025
The R package planor enables the user to search for, and construct, factorial designs satisfying given conditions. The user specifies the factors and their numbers of levels, the factorial terms which are assumed to be non-zero, and the subset of those which are to be estimated. Both block and treatment factors can be allowed for, and they may have either fixed or random effects, as well as hierarchy relationships. The designs are generalised regular designs, which means that each one is constructed by using a design key and that the underlying theory is that of finite abelian groups. The main theoretical results and algorithms on which planor is based are developed and illustrated, with the emphasis on mathematical rather than programming details. Sections 3–5 are dedicated to the elementary case, when the numbers of levels of all factors are powers of the same prime. The ineligible factorial terms associated with users’ specifications are defined and it is shown how they can be used to search for a design key by a backtrack algorithm. Then the results are extended to the case when different primes are involved, by making use of the Sylow decomposition of finite abelian groups. The proposed approach provides a unified framework for a wide range of factorial designs.
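The design-key construction can be sketched directly: each run assigns levels to r basic unit factors, and each design factor's level is a linear combination of those, taken mod p, given by a column of the key matrix. A toy illustration (not planor's implementation), for the elementary case where all factors have p levels:

```python
from itertools import product

def regular_design(key, p):
    """Generate a regular design from a design key over Z_p.
    key[i][j] is the coefficient of basic factor i in design factor j;
    every design-factor level is computed mod p from the basic-factor levels."""
    r, f = len(key), len(key[0])
    return [tuple(sum(u[i] * key[i][j] for i in range(r)) % p for j in range(f))
            for u in product(range(p), repeat=r)]
```

For key = [[1, 0, 1], [0, 1, 1]] and p = 2 this yields the half-fraction of the 2^3 design with defining relation C = A + B (mod 2), i.e. four runs in which the third factor is aliased with the two-factor interaction.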
Open Access for this article was paid for by the French Research Agency (ANR), project Escapade (ANR-12-AGRO-0003).
Fri, 01 Sep 2017 00:00:00 GMT
http://hdl.handle.net/10023/11025
Kobilinsky, André; Monod, Hervé; Bailey, R. A.

On the correspondence from Bayesian log-linear modelling to logistic regression modelling with g-priors
http://hdl.handle.net/10023/10854
Consider a set of categorical variables where at least one of them is binary. The log-linear model that describes the counts in the resulting contingency table implies a specific logistic regression model, with the binary variable as the outcome. Within the Bayesian framework, the g-prior and mixtures of g-priors are commonly assigned to the parameters of a generalized linear model. We prove that assigning a g-prior (or a mixture of g-priors) to the parameters of a certain log-linear model designates a g-prior (or a mixture of g-priors) on the parameters of the corresponding logistic regression. By deriving an asymptotic result, and with numerical illustrations, we demonstrate that when a g-prior is adopted, this correspondence extends to the posterior distribution of the model parameters. Thus, it is valid to translate inferences from fitting a log-linear model to inferences within the logistic regression framework, with regard to the presence of main effects and interaction terms.
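For reference, one common form of Zellner's g-prior on the coefficient vector of a generalized linear model with design matrix $X$ is (up to the dispersion parameter)

\[
\beta \mid g \;\sim\; \mathrm{N}\!\left(0,\; g\,(X^{\top}X)^{-1}\right),
\]

with mixtures of g-priors obtained by placing a hyperprior on $g$. The correspondence result above transfers this prior structure between the log-linear and logistic parameterizations.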
Thu, 01 Mar 2018 00:00:00 GMT
http://hdl.handle.net/10023/10854
Papathomas, Michail

Estimating Key Largo woodrat abundance using spatially explicit capture–recapture and trapping point transects
http://hdl.handle.net/10023/10625
The Key Largo woodrat (Neotoma floridana smalli) is an endangered rodent with a restricted geographic range and small population size. Establishing an efficient monitoring program of its abundance has been problematic; previous trapping designs have not worked well because the species is sparsely distributed. We compared Key Largo woodrat abundance estimates in Key Largo, Florida, USA, obtained using trapping point transects (TPT) and spatially explicit capture–recapture (SECR) based on statistical properties, survey effort, practicality, and cost. Both methods combine aspects of distance sampling with capture–recapture, but TPT relies on radiotracking individuals to estimate detectability and SECR relies on repeat capture information to estimate densities of home ranges. Abundance estimates using TPT in the spring of 2007 and 2008 were 333 woodrats (CV = 0.46) and 696 (CV = 0.43), respectively. Abundance estimates using SECR in the spring, summer, and winter of 2007 were 97 (CV = 0.31), 334 (CV = 0.26), and 433 (CV = 0.20) animals, respectively. Trapping point transects used approximately 960 person-hours and 1,010 trap-nights/season. Spatially explicit capture–recapture used approximately 500 person-hours and 6,468 trap-nights/season. Significant time was saved in the SECR survey by setting large numbers of traps close together, minimizing time walking between traps. Trapping point transects were practical to implement in the field, and valuable auxiliary information on Key Largo woodrat behavior was obtained via radiocollaring. In this particular study, detectability of the woodrat using TPT was very low and consequently the SECR method was more efficient. Both methods require a substantial investment in survey effort to detect any change in abundance because of large uncertainty in estimates.
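Both approaches ultimately divide a count by an estimated detection probability. A generic sketch of that canonical estimator, with a delta-method CV (the Poisson-count and independence assumptions are simplifications of both TPT and SECR):

```python
def abundance_estimate(n_detected, p_detect, cv_p):
    """Horvitz-Thompson-style estimate N = n / p, with overall CV combining
    count uncertainty (Poisson: CV = 1/sqrt(n)) and detectability uncertainty,
    assuming the two components are independent."""
    n_hat = n_detected / p_detect
    cv_n = (1.0 / n_detected) ** 0.5
    cv_total = (cv_n ** 2 + cv_p ** 2) ** 0.5
    return n_hat, cv_total
```

The large CVs reported above (0.20-0.46) illustrate why detecting a change in abundance requires substantial survey effort: the CV of the detectability estimate typically dominates.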
JMP was funded by Disney's Animal Programs, the US Fish and Wildlife Service and University of St Andrews.
Wed, 01 Jun 2016 00:00:00 GMT
http://hdl.handle.net/10023/10625
Potts, Joanne Marie; Buckland, Stephen Terrence; Thomas, Len; Savage, Anne

Assigning stranded bottlenose dolphins to source stocks using stable isotope ratios following the Deepwater Horizon oil spill
http://hdl.handle.net/10023/10588
The potential for stranded dolphins to serve as a tool for monitoring free-ranging populations would be enhanced if their stocks of origin were known. We used stable isotopes of carbon, nitrogen, and sulfur from skin to assign stranded bottlenose dolphins Tursiops truncatus to different habitats, as a proxy for stocks (demographically independent populations), following the Deepwater Horizon oil spill. Model results from biopsy samples collected from dolphins from known habitats (n = 205) resulted in an 80.5% probability of correct assignment. These results were applied to data from stranded dolphins (n = 217), resulting in predicted assignment probabilities of 0.473, 0.172, and 0.355 to Estuarine, Barrier Island (BI), and Coastal stocks, respectively. Differences were found west and east of the Mississippi River, with more Coastal dolphins stranding in western Louisiana and more Estuarine dolphins stranding in Mississippi. Within the Estuarine East Stock, 2 groups were identified, one predominantly associated with Mississippi and Alabama estuaries and another with western Florida. δ15N values were higher in stranded samples for both Estuarine and BI stocks, potentially indicating nutritional stress. High probabilities of correct assignment of the biopsy samples indicate predictable variation in stable isotopes and fidelity to habitat. The power of δ34S to discriminate habitats relative to salinity was essential. Stable isotopes may provide guidance regarding where additional testing is warranted to confirm demographic independence and aid in determining the source habitat of stranded dolphins, thus increasing the value of biological data collected from stranded individuals.
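The assignment step can be sketched as a naive Gaussian classifier over the isotope ratios: score each stock by the likelihood of the observed values under stock-specific distributions, then normalise. The means, SDs and equal priors below are invented for illustration; the paper's model is more elaborate:

```python
import math

def normal_pdf(x, mean, sd):
    """Gaussian density."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def assign_stock(sample, stock_params):
    """Posterior probability of each stock for one individual's isotope ratios,
    assuming independent Gaussians per isotope per stock and equal priors.
    sample: {isotope: value}; stock_params: {stock: {isotope: (mean, sd)}}."""
    like = {}
    for stock, params in stock_params.items():
        l = 1.0
        for iso, x in sample.items():
            mean, sd = params[iso]
            l *= normal_pdf(x, mean, sd)
        like[stock] = l
    total = sum(like.values())
    return {s: l / total for s, l in like.items()}
```

With well-separated δ34S distributions (as between low-salinity estuarine and coastal habitats), a single isotope can already drive near-certain assignment, which mirrors the discriminating power of sulfur noted above.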
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10588
Hohn, A. A.; Thomas, L.; Carmichael, R. H.; Litz, J.; Clemons-Chevis, C.; Shippee, S. F.; Sinclair, C.; Smith, S.; Speakman, T. R.; Tumlin, M. C.; Zolman, E. S.

Quantifying injury to common bottlenose dolphins from the Deepwater Horizon oil spill using an age-, sex- and class-structured population model
http://hdl.handle.net/10023/10587
Field studies documented increased mortality, adverse health effects, and reproductive failure in common bottlenose dolphins Tursiops truncatus following the Deepwater Horizon (DWH) oil spill. In order to determine the appropriate type and amount of restoration needed to compensate for losses, the overall extent of injuries to dolphins had to be quantified. Simply counting dead individuals does not consider long-term impacts to populations, such as the loss of future reproductive potential from mortality of females, or the chronic health effects that continue to compromise survival long after acute effects subside. Therefore, we constructed a sex- and age-structured model of population growth and included additional class structure to represent dolphins exposed and unexposed to DWH oil. The model was applied for multiple stocks to predict injured population trajectories using estimates of post-spill survival and reproductive rates. Injured trajectories were compared to baseline trajectories that were expected had the DWH incident not occurred. Two principal measures of injury were computed: (1) lost cetacean years (LCY): the difference between baseline and injured population size, summed over the modeled time period, and (2) time to recovery: the number of years for the stock to recover to within 95% of baseline. For the dolphin stock in Barataria Bay, Louisiana, the estimated LCY was substantial: 30 347 LCY (95% CI: 11 511 to 89 746). Estimated time to recovery was 39 yr (95% CI: 24 to 80). Similar recovery timelines were predicted for stocks in the Mississippi River Delta, Mississippi Sound, Mobile Bay and the Northern Coastal Stock.
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10587
Schwacke, Lori H.; Thomas, Len; Wells, Randall S.; McFee, Wayne E.; Hohn, Aleta A.; Mullin, Keith D.; Zolman, Eric S.; Quigley, Brian M.; Rowles, Teri K.; Schwacke, John H.

Where were they from? Modelling the source stock of dolphins stranded after the Deepwater Horizon oil spill using genetic and stable isotope data
http://hdl.handle.net/10023/10586
Understanding the source stock of common bottlenose dolphins Tursiops truncatus that stranded in the northern Gulf of Mexico subsequent to the Deepwater Horizon oil spill was essential to accurately quantify injury and apportion individuals to the appropriate stock. The aim of this study, part of the Natural Resource Damage Assessment (NRDA), was to estimate the proportion of the 932 recorded strandings between May 2010 and June 2014 that came from coastal versus bay, sound and estuary (BSE) stocks. Four sources of relevant information were available on overlapping subsets totaling 336 (39%) of the strandings: genetic stock assignment, stable isotope ratios, photo-ID and individual genetic-ID. We developed a hierarchical Bayesian model for combining these sources that weighted each data source for each stranding according to a measure of estimated precision: the effective sample size (ESS). The photo- and genetic-ID data were limited and considered to potentially introduce biases, so these data sources were excluded from analyses used in the NRDA. Estimates were calculated separately in 3 regions: East (of the Mississippi outflow), West (of the Mississippi outflow through Vermilion Bay, Louisiana) and Western Louisiana (west of Vermilion Bay to the Texas-Louisiana border); the estimated proportions of coastal strandings were, respectively 0.215 (95% CI: 0.169-0.263), 0.016 (0.036-0.099) and 0.622 (0.487-0.803). This method represents a general approach for integrating multiple sources of information that have differing uncertainties.
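The core weighting idea can be sketched as an ESS-weighted pool of the per-source coastal probabilities for a single stranding. This is a simplification of the paper's hierarchical Bayesian model, and the numbers in the usage note are invented:

```python
def pool_by_ess(estimates):
    """Combine per-source probability estimates for one stranding, weighting
    each data source by its effective sample size (ESS).
    estimates: list of (p_coastal, ess) pairs, one per available source."""
    total_ess = sum(ess for _, ess in estimates)
    return sum(p * ess for p, ess in estimates) / total_ess
```

For example, a genetic assignment of 0.8 with ESS 30 pooled with an isotope-based 0.2 with ESS 10 gives 0.65: the more precise source dominates, which is exactly the behaviour the ESS weighting is meant to deliver.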
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10586
Thomas, L.; Booth, C. G.; Rosel, P. E.; Hohn, A.; Litz, J.; Schwacke, L. H.

Survival, density, and abundance of common bottlenose dolphins in Barataria Bay (USA) following the Deepwater Horizon oil spill
http://hdl.handle.net/10023/10580
To assess potential impacts of the Deepwater Horizon oil spill in April 2010, we conducted boat-based photo-identification surveys for common bottlenose dolphins Tursiops truncatus in Barataria Bay, Louisiana, USA (~230 km2, located 167 km WNW of the spill center). Crews logged 838 h of survey effort along pre-defined routes on 10 occasions between late June 2010 and early May 2014. We applied a previously unpublished spatial version of the robust design capture-recapture model to estimate survival and density. This model used photo locations to estimate density in the absence of study area boundaries and to separate mortality from permanent emigration. To estimate abundance, we applied density estimates to saltwater (salinity > ~8 ppt) areas of the bay where telemetry data suggested that dolphins reside. Annual dolphin survival varied between 0.80 and 0.85 (95% CIs varied from 0.77 to 0.90) over 3 yr following the Deepwater Horizon spill. In 2 non-oiled bays (in Florida and North Carolina), historic survival averages approximately 0.95. From June to November 2010, abundance increased from 1300 (95% CI ± ~130) to 3100 (95% CI ± ~400), then declined and remained between ~1600 and ~2400 individuals until spring 2013. In fall 2013 and spring 2014, abundance increased again to approximately 3100 individuals. Dolphin abundance prior to the spill was unknown, but we hypothesize that some dolphins moved out of the sampled area, probably northward into marshes, prior to initiation of our surveys in late June 2010, and later immigrated back into the sampled area.
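Capture-recapture abundance estimation is easiest to see in the two-sample case. The spatial robust-design model used in the paper generalises this considerably, but Chapman's bias-corrected Lincoln-Petersen estimator conveys the core idea behind photo-identification surveys:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected two-sample capture-recapture estimator:
    n1 individuals identified on occasion 1, n2 on occasion 2,
    m2 of which were matches (recaptures) between the two occasions."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

With 100 dolphins photographed on each of two occasions and 24 matches, the estimate is about 407 animals; fewer matches would imply a larger population.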
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10580
McDonald, Trent L.; Hornsby, Fawn E.; Speakman, Todd R.; Zolman, Eric S.; Mullin, Keith D.; Sinclair, Carrie; Rosel, Patricia E.; Thomas, Len; Schwacke, Lori H.

Delphinid echolocation click detection probability on near-seafloor sensors
http://hdl.handle.net/10023/10512
The probability of detecting echolocating delphinids on a near-seafloor sensor was estimated using two Monte Carlo simulation methods. One method estimated the probability of detecting a single click (cue counting); the other estimated the probability of detecting a group of delphinids (group counting). Echolocation click beam pattern and source level assumptions strongly influenced detectability predictions by the cue counting model. Group detectability was also influenced by assumptions about group behaviors. Model results were compared to in situ recordings of encounters with Risso's dolphin (Grampus griseus) and presumed pantropical spotted dolphin (Stenella attenuata) from a near-seafloor four-channel tracking sensor deployed in the Gulf of Mexico (25.537°N 84.632°W, depth 1220 m). Horizontal detection range, received level and estimated source level distributions from localized encounters were compared with the model predictions. Agreement between in situ results and model predictions suggests that simulations can be used to estimate detection probabilities when direct distance estimation is not available.
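A stripped-down cue-counting simulation makes the sensitivity to source-level assumptions concrete. This sketch assumes an omnidirectional beam (the paper shows beam pattern matters) and spherical spreading plus linear absorption; all parameter values are illustrative:

```python
import math
import random

def detection_probability(source_level, threshold, max_range_m,
                          n_sims=10_000, alpha_db_per_km=3.0, seed=1):
    """Monte Carlo estimate of the probability that a single click is detected.
    Clicks occur uniformly over a disc of radius max_range_m around the sensor;
    received level = SL - 20*log10(r) - absorption. Omnidirectional beam assumed."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        # uniform over a disc: r = sqrt(U) * max_range
        r = max_range_m * math.sqrt(rng.random())
        rl = source_level - 20.0 * math.log10(max(r, 1.0)) - alpha_db_per_km * r / 1000.0
        if rl > threshold:
            detected += 1
    return detected / n_sims
```

Re-running with different source-level distributions shows directly how strongly detectability predictions depend on that assumption.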
Funding for HARP data collection and analysis was provided by the Natural Resource Damage Assessment partners (20105138) and the Center for the Integrated Modeling and Analysis of the Gulf Ecosystem (C-IMAGE) Consortium of the BP/Gulf of Mexico Research Initiative (SA 12-10/GoMRI-007). The analyses and opinions expressed are those of the authors and not necessarily those of the funding entities. This research was made possible by a grant from The Gulf of Mexico Research Initiative/C-IMAGE II.
Thu, 01 Sep 2016 00:00:00 GMT
http://hdl.handle.net/10023/10512
Frasier, Kaitlin E.; Wiggins, Sean M.; Harris, Danielle; Marques, Tiago A.; Thomas, Len; Hildebrand, John A.

A simulation approach to assessing environmental risk of sound exposure to marine mammals
http://hdl.handle.net/10023/10382
Intense underwater sounds caused by military sonar, seismic surveys, and pile driving can harm acoustically sensitive marine mammals. Many jurisdictions require such activities to undergo marine mammal impact assessments to guide mitigation. However, the ability to assess impacts in a rigorous, quantitative way is hindered by large knowledge gaps concerning hearing ability, sensitivity, and behavioral responses to noise exposure. We describe a simulation-based framework, called SAFESIMM (Statistical Algorithms For Estimating the Sonar Influence on Marine Megafauna), that can be used to calculate the numbers of agents (animals) likely to be affected by intense underwater sounds. We illustrate the simulation framework using two species that are likely to be affected by marine renewable energy developments in UK waters: gray seal (Halichoerus grypus) and harbor porpoise (Phocoena phocoena). We investigate three sources of uncertainty: how sound energy is perceived by agents with differing hearing abilities; how agents move in response to noise (i.e., the strength and directionality of their evasive movements); and the way in which these responses may interact with longer-term constraints on agent movement. The estimate of received sound exposure level (SEL) is influenced most strongly by the weighting function used to account for the species' presumed hearing ability. Strongly directional movement away from the sound source can cause modest reductions (~5 dB) in SEL over the short term (periods of less than 10 days). Beyond 10 days, the way in which agents respond to noise exposure has little or no effect on SEL, unless their movements are constrained by natural boundaries. Most experimental studies of noise impacts have been short-term; however, data are needed on long-term effects because uncertainty about predicted SELs accumulates over time. Synthesis and applications: Simulation frameworks offer a powerful way to explore, understand, and estimate effects of cumulative sound exposure on marine mammals and to quantify associated levels of uncertainty. However, they can often require subjective decisions that have important consequences for management recommendations, and the basis for these decisions must be clearly described.
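Cumulative sound exposure level accumulates on the linear (energy) scale, which is one reason uncertainty compounds over long exposures. The standard energy sum is:

```python
import math

def cumulative_sel(sels_db):
    """Cumulative SEL (dB re 1 uPa^2 s): convert each per-exposure SEL to
    linear energy, sum the energies, and convert back to decibels."""
    return 10.0 * math.log10(sum(10.0 ** (s / 10.0) for s in sels_db))
```

Two equal exposures therefore raise the cumulative SEL by about 3 dB, so small per-exposure biases accumulate steadily across the many exposures in a multi-day simulation.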
Sat, 01 Apr 2017 00:00:00 GMT
http://hdl.handle.net/10023/10382
Donovan, Carl R.; Harris, Catriona M.; Milazzo, Lorenzo; Harwood, John; Marshall, Laura; Williams, Rob

The classification of partition homogeneous groups with applications to semigroup theory
http://hdl.handle.net/10023/10228
Let λ=(λ1,λ2,...) be a partition of n, that is, a sequence of positive integers in non-increasing order with sum n. Let Ω:={1,...,n}. An ordered partition P=(A1,A2,...) of Ω has type λ if |Ai|=λi. Following Martin and Sagan, we say that G is λ-transitive if, for any two ordered partitions P=(A1,A2,...) and Q=(B1,B2,...) of Ω of type λ, there exists g ∈ G with Aig=Bi for all i. A group G is said to be λ-homogeneous if, given two ordered partitions P and Q as above, inducing the sets P'={A1,A2,...} and Q'={B1,B2,...}, there exists g ∈ G such that P'g=Q'. Clearly a λ-transitive group is λ-homogeneous. The first goal of this paper is to classify the λ-homogeneous groups (Theorems 1.1 and 1.2). The second goal is to apply this classification to a problem in semigroup theory. Let Tn and Sn denote the full transformation monoid and the symmetric group on Ω, respectively. Fix a group H ≤ Sn. Given a non-invertible transformation a ∈ Tn∖Sn and a group G ≤ Sn, we say that (a,G) is an H-pair if the semigroups generated by {a} ∪ H and {a} ∪ G contain the same non-units, that is, ⟨a,G⟩∖G = ⟨a,H⟩∖H. Using the classification of the λ-homogeneous groups we classify all the Sn-pairs (Theorem 1.8). For a multitude of transformation semigroups this theorem immediately implies a description of their automorphisms, congruences, generators and other relevant properties (Theorem 8.5). This topic involves both group theory and semigroup theory; we have attempted to include enough exposition to make the paper self-contained for researchers in both areas. The paper finishes with a number of open problems on permutation and linear groups.
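For tiny n, the λ-transitive and λ-homogeneous conditions can be checked by brute force. The sketch below (an illustration with hypothetical helper names; points are 0-indexed and permutations act as tuples, g[x] being the image of x) confirms, for λ=(2,2) on 4 points, that the alternating group A4 is λ-homogeneous, while the Klein four-group V4 is neither λ-homogeneous nor λ-transitive.

```python
from itertools import combinations, permutations

def ordered_partitions(elems, lam):
    # all ordered set partitions of elems with block sizes lam
    if not lam:
        yield ()
        return
    for block in combinations(elems, lam[0]):
        rest = tuple(e for e in elems if e not in block)
        for tail in ordered_partitions(rest, lam[1:]):
            yield (frozenset(block),) + tail

def image(g, block):
    # image of a block under the permutation g
    return frozenset(g[x] for x in block)

def is_lambda_transitive(G, n, lam):
    parts = list(ordered_partitions(tuple(range(n)), lam))
    # some g must map each block Ai of P onto the corresponding block Bi of Q
    return all(any(all(image(g, A) == B for A, B in zip(P, Q)) for g in G)
               for P in parts for Q in parts)

def is_lambda_homogeneous(G, n, lam):
    parts = list(ordered_partitions(tuple(range(n)), lam))
    # some g must map the unordered partition P' onto Q'
    return all(any({image(g, A) for A in P} == set(Q) for g in G)
               for P in parts for Q in parts)

def parity(p):
    return sum(p[i] > p[j]
               for i in range(len(p)) for j in range(i + 1, len(p))) % 2

S4 = [tuple(p) for p in permutations(range(4))]
A4 = [p for p in S4 if parity(p) == 0]
V4 = [(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)]
```

V4 fails because every element of V4 fixes each of the three unordered partitions of type (2,2) setwise, so no element can move one partition to another.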
Fri, 15 Apr 2016 00:00:00 GMT
http://hdl.handle.net/10023/10228
André, Jorge; Araújo, João; Cameron, Peter Jephson

An assessment of the population of cotton-top tamarins (Saguinus oedipus) and their habitat in Colombia
http://hdl.handle.net/10023/10100
Numerous animals have declining populations due to habitat loss, illegal wildlife trade, and climate change. The cotton-top tamarin (Saguinus oedipus) is a Critically Endangered primate species, endemic to northwest Colombia, threatened by deforestation and illegal trade. In order to assess the current state of this species, we analyzed changes in the population of cotton-top tamarins and its habitat from 2005 to 2012. We used a tailor-made "lure strip transect" method to survey 43 accessible forest parcels that represent 30% of the species' range. Estimated population size in the surveyed region was approximately 2,050 in 2005 and 1,900 in 2012, with a coefficient of variation of approximately 10%. The estimated population change between surveys was -7% (a decline of approximately 1.3% per year), suggesting a relatively stable population. If densities of inaccessible forest parcels are similar to those of surveyed samples, the estimated population of cotton-top tamarins in the wild in 2012 was 6,946 individuals. We also recorded little change in the amount of suitable habitat for cotton-top tamarins between sample periods: in 2005, 18% of surveyed forest was preferred habitat for cotton-top tamarins, while in 2012, 17% was preferred. We attribute the relatively stable population of this Critically Endangered species to increased conservation efforts of Proyecto Tití, conservation NGOs, and the Colombian government. Due to continued threats to cotton-top tamarins and their habitat such as agriculture and urban expansion, ongoing conservation efforts are needed to ensure the long-term survival of cotton-top tamarins in Colombia.
Wed, 28 Dec 2016 00:00:00 GMT
http://hdl.handle.net/10023/10100
Savage, Anne; Thomas, Len; Feilen, Katie L.; Kidney, Darren; Soto, Luis H.; Pearson, Mackenzie; Medina, Felix S.; Emeris, German; Guillen, Rosamira R.

On optimality and construction of circular repeated-measurements designs
http://hdl.handle.net/10023/10092
The aim of this paper is to characterize and construct universally optimal designs among the class of circular repeated-measurements designs when the parameters do not permit balance for carry-over effects. It is shown that some circular weakly neighbour balanced designs defined by Filipiak and Markiewicz (2012) are universally optimal repeated-measurements designs. These results extend the work of Magda (1980), Kunert (1984b) and Filipiak and Markiewicz (2012).
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10092
Bailey, Rosemary Anne; Cameron, Peter Jephson; Filipiak, Katarzyna; Kunert, Joachim; Markiewicz, Augustyn

Estimability of variance components when all model matrices commute
http://hdl.handle.net/10023/9983
This paper deals with estimability of variance components in mixed models when all model matrices commute. In this situation, it is well known that the best linear unbiased estimators of fixed effects are the ordinary least squares estimators. If, in addition, the family of possible variance-covariance matrices forms an orthogonal block structure, then there are the same number of variance components as strata, and the variance components are all estimable if and only if there are non-zero residual degrees of freedom in each stratum. We investigate the case where the family of possible variance-covariance matrices, while still commutative, no longer forms an orthogonal block structure. Now the variance components may or may not all be estimable, but there is no clear link with residual degrees of freedom. Whether or not they are all estimable, there may or may not be uniformly best unbiased quadratic estimators of those that are estimable. Examples are given to demonstrate all four possibilities.
This work was partially supported by national funds of FCT - Foundation for Science and Technology under UID/MAT/00212/2013.
Tue, 01 Mar 2016 00:00:00 GMT
http://hdl.handle.net/10023/9983
Bailey, Rosemary Anne; Ferreira, Sandra S.; Ferreira, Dario; Nunes, Celia

Extinction is imminent for Mexico’s endemic porpoise unless fishery bycatch is eliminated
http://hdl.handle.net/10023/9938
The population of Mexico’s endemic porpoise, the vaquita (Phocoena sinus), is collapsing primarily due to bycatch in illegal gillnets set for totoaba (Totoaba macdonaldi), an endangered fish whose swim bladders are exported to China. Previous research estimated that vaquitas declined from about 567 to 245 individuals between 1997 and 2008. Acoustic monitoring between 2011 and 2015 showed a decline of 34%/year. Here, we combine visual line transect and passive acoustic data collected simultaneously in a robust spatial analysis to estimate that only 59 (95% Bayesian Credible Interval [CRI] 22 – 145) vaquita remained as of autumn 2015, a decrease since 1997 of 92% (95% CRI 80%-97%). Risk analysis suggests that if the current, temporary gillnet ban is maintained and effectively enforced, vaquitas could recover to 2008 population levels by 2050. Otherwise, the species is likely to be extinct within a decade.
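As a rough illustration of the "extinct within a decade" figure, here is a back-of-envelope expected-value projection (not the authors' risk analysis; it ignores demographic stochasticity and the wide credible interval) starting from the 2015 point estimate of 59 animals and assuming the observed 34% annual decline continues:

```python
def years_until_below(n0, annual_decline, threshold=1.0):
    # project n0 * (1 - annual_decline)**t year by year and count
    # how many years until the expected population drops below threshold
    years, n = 0, float(n0)
    while n >= threshold:
        n *= 1.0 - annual_decline
        years += 1
    return years

years_until_below(59, 0.34)  # -> 10: fewer than one expected animal after 10 years
```

The expected count falls below one individual after ten years, consistent with the abstract's decade-scale warning.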
Primary funding was by Secretaria del Medio Ambiente y Recursos Naturales (Secretario R. Pacchiano). Mexican support was from SEMARNAT, CONABIO, CONANP, PROFEPA, SEMAR, and WWF-Mexico. US support from NOAA-Fisheries-SWFSC and The Marine Mammal Center.
Thu, 12 Oct 2017 00:00:00 GMT
http://hdl.handle.net/10023/9938
Taylor, Barbara L.; Rojas-Bracho, Lorenzo; Moore, Jeffrey; Jaramillo-Legorreta, Armando; Ver Hoef, Jay M.; Cardenas-Hinojosa, Gustavo; Nieto-Garcia, Edwyna; Barlow, Jay; Gerrodette, Tim; Tregenza, Nicholas; Thomas, Len; Hammond, Philip S.

Passive acoustic monitoring of the decline of Mexico's critically endangered vaquita
http://hdl.handle.net/10023/9937
The vaquita (Phocoena sinus) is the world's most endangered marine mammal with ≈245 individuals remaining in 2008. This species of porpoise is endemic to the northern Gulf of California, Mexico, and has historically suffered population declines from unsustainable bycatch in gillnets. An illegal gillnet fishery for an endangered fish, the totoaba (Totoaba macdonaldi), has recently resurged throughout the vaquita's range. The secretive but lucrative wildlife trade with China for totoaba swim bladders has probably increased vaquita bycatch mortality, but by an unknown amount. Precise population monitoring by visual surveys is difficult because vaquitas are inherently hard to see and have now become so rare that sighting rates are very low. However, their echolocation clicks can be identified readily on specialized acoustic detectors. Acoustic detections on an array of 46 moored detectors indicate that vaquita acoustic activity declined by 80% between 2011 and 2015 in the central part of the species’ range. Statistical models estimate an annual rate of decline of 34% (95% Bayesian Credible Interval -48% to -21%). Based on preliminary acoustic monitoring results from 2011–2014 the Government of Mexico enacted and is enforcing an emergency 2-year ban of gillnets throughout the species’ range to prevent extinction, at a cost of $74 million USD to compensate fishers. Developing precise acoustic monitoring methods proved critical to exposing the severity of vaquitas’ decline and emphasizes the need for continual monitoring to effectively manage critically endangered species.
Different institutions and agencies have provided funding during the development and implementation of the acoustic monitoring program.
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/9937
Jaramillo-Legorreta, Armando; Cardenas-Hinojosa, Gustavo; Nieto-Garcia, Edwyna; Rojas-Bracho, Lorenzo; Ver Hoef, Jay; Moore, Jeffrey; Tregenza, Nicholas; Barlow, Jay; Gerrodette, Tim; Thomas, Len; Taylor, Barbara

The challenges of analyzing behavioral response study data : an overview of the MOCHA (Multi-study OCean acoustics Human effects Analysis) project
http://hdl.handle.net/10023/9923
This paper describes the MOCHA project, which aims to develop novel approaches for the analysis of data collected during Behavioral Response Studies (BRSs). BRSs are experiments aimed at directly quantifying the effects of controlled dosages of natural or anthropogenic stimuli (typically sound) on marine mammal behavior. These experiments typically result in low sample sizes relative to variability, so we analyze a number of studies in combination to maximize the gain from each one. We describe a suite of analytical tools applied to BRS data on beaked whales, including a simulation study aimed at informing future experimental design.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/9923
Harris, Catriona M.; Thomas, Len; Sadykova, Dinara; De Ruiter, Stacy Lynn; Tyack, Peter Lloyd; Southall, Brandon L.; Read, Andrew J.; Miller, Patrick

String C-groups as transitive subgroups of Sn
http://hdl.handle.net/10023/9794
If Γ is a string C-group which is isomorphic to a transitive subgroup of the symmetric group Sn (other than Sn and the alternating group An), then the rank of Γ is at most n/2+1, with finitely many exceptions (which are classified). It is conjectured that only the symmetric group has to be excluded.
Mon, 01 Feb 2016 00:00:00 GMT
http://hdl.handle.net/10023/9794
Cameron, Peter Jephson; Fernandes, Maria Elisa; Leemans, Dimitri; Mixer, Mark

Habitat complexity in aquatic microcosms affects processes driven by detritivores
http://hdl.handle.net/10023/9749
Habitat complexity can influence predation rates (e.g. by providing refuge) but other ecosystem processes and species interactions might also be modulated by the properties of habitat structure. Here, we focussed on how complexity of artificial habitat (plastic plants), in microcosms, influenced short-term processes driven by three aquatic detritivores. The effects of habitat complexity on leaf decomposition, production of fine organic matter and pH levels were explored by measuring complexity in three ways: 1. as the presence vs. absence of habitat structure; 2. as the amount of structure (3 or 4.5 g of plastic plants); and 3. as the spatial configuration of structures (measured as fractal dimension). The experiment also addressed potential interactions among the consumers by running all possible species combinations. In the experimental microcosms, habitat complexity influenced how species performed, especially when comparing structure present vs. structure absent. Treatments with structure showed higher fine particulate matter production and lower pH compared to treatments without structures and this was probably due to higher digestion and respiration when structures were present. When we explored the effects of the different complexity levels, we found that the amount of structure added explained more than the fractal dimension of the structures. We give a detailed overview of the experimental design, statistical models and R codes, because our statistical analysis can be applied to other study systems (and disciplines such as restoration ecology). We further make suggestions of how to optimise statistical power when artificially assembling, and analysing, ‘habitat complexity’ by not confounding complexity with the amount of structure added. In summary, this study highlights the importance of habitat complexity for energy flow and the maintenance of ecosystem processes in aquatic ecosystems.
LF was supported in part by the Spanish Ministry of Economy and Competitiveness through the project SCARCE Consolider-Ingenio CSD2009-00065.
Tue, 01 Nov 2016 00:00:00 GMT
http://hdl.handle.net/10023/9749
Flores, Lorea; Bailey, R. A.; Elosegi, Arturo; Larrañaga, Aitor; Reiss, Julia

Bayesian multi-species modelling of non-negative continuous ecological data with a discrete mass at zero
http://hdl.handle.net/10023/9626
Severe declines in the number of some songbirds over the last 40 years have caused heated debate amongst interested parties. Many factors have been suggested as possible causes for these declines, including an increase in the abundance and distribution of an avian predator, the Eurasian sparrowhawk Accipiter nisus. To test for evidence for a predator effect on the abundance of its prey, we analyse data on 10 species visiting garden bird feeding stations monitored by the British Trust for Ornithology in relation to the abundance of sparrowhawks. We apply Bayesian hierarchical models to data relating to averaged maximum weekly counts from a garden bird monitoring survey. These data are essentially continuous, bounded below by zero, but for many species show a marked spike at zero that many standard distributions would not be able to account for. We use the Tweedie distributions, which for certain areas of parameter space relate to continuous non-negative distributions with a discrete probability mass at zero, and are hence able to deal with the shape of the empirical distributions of the data.
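The zero spike described above is exactly what the compound Poisson-gamma representation of the Tweedie family produces: for power parameter 1 < p < 2, a draw is a Poisson number of gamma summands and is exactly zero whenever that count is zero. A minimal simulation sketch (parameter values are arbitrary illustrations, not those fitted in the thesis):

```python
import numpy as np

def rtweedie_cpg(size, lam=1.0, shape=2.0, scale=0.5, seed=42):
    # Compound Poisson-gamma draw: N ~ Poisson(lam) gamma(shape, scale)
    # summands, so the result is exactly zero with probability exp(-lam)
    # and otherwise a strictly positive continuous value.
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=size)
    return np.array([rng.gamma(shape, scale, k).sum() if k else 0.0
                     for k in counts])

x = rtweedie_cpg(10_000)
# for lam = 1 the empirical mass at zero is close to exp(-1), about 0.37
```

A histogram of x shows the discrete spike at zero sitting beside a continuous positive density, the shape that motivates the Tweedie choice over a plain gamma or log-normal.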
The methods developed in this thesis begin by modelling single prey species independently with an avian predator as a covariate, using MCMC methods to explore parameter and model spaces. This model is then extended to a multiple-prey species model, testing for interactions between species as well as synchrony in their response to environmental factors and unobserved variation.

Finally we use a relatively new methodological framework, namely the SPDE approach in the INLA framework, to fit a multi-species spatio-temporal model to the ecological data.

The results from the analyses are consistent with the hypothesis that sparrowhawks are suppressing the numbers of some species of birds visiting garden feeding stations. Only the species most susceptible to sparrowhawk predation seem to be affected.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/9626
Swallow, Ben

Effects of a scientific echo sounder on the behavior of short-finned pilot whales (Globicephala macrorhynchus)
http://hdl.handle.net/10023/9555
Active echo sounding devices are often employed for commercial or scientific purposes in the foraging habitats of marine mammals. We conducted an experiment off Cape Hatteras, North Carolina, USA, to assess whether the behavior of short-finned pilot whales (Globicephala macrorhynchus) changed when exposed to an EK60 scientific echo sounder. We attached digital acoustic recording tags (DTAGs) to nine individuals, five of which were exposed. A hidden Markov model to characterize diving states with and without exposure provided no evidence for a change in foraging behavior. However, generalized estimating equations to model changes in heading variance over the entire tag record under all experimental conditions showed a consistent increase in heading variance during exposure over all values of depth and pitch. This suggests that regardless of behavioral state, the whales changed their heading more frequently when the echo sounder was active. This response could represent increased vigilance in which whales maintained awareness of echo sounder location by increasing their heading variance and provides the first quantitative analysis on reactions of cetaceans to a scientific echo sounder.
This work was supported by award RC-2154 from the Strategic Environmental Research and Development Program and funding from the Naval Facilities Engineering Command Atlantic and NOAA Fisheries, Southeast Region.
Mon, 01 May 2017 00:00:00 GMT
http://hdl.handle.net/10023/9555
Quick, Nicola; Scott-Hayward, Lindesay; Sadykova, Dinara; Nowacek, Doug; Read, Andrew

Lengths of words in transformation semigroups generated by digraphs
http://hdl.handle.net/10023/9277
Given a simple digraph D on n vertices (with n≥2), there is a natural construction of a semigroup of transformations ⟨D⟩. For any edge (a, b) of D, let a→b be the idempotent of rank n−1 mapping a to b and fixing all vertices other than a; then, define ⟨D⟩ to be the semigroup generated by a→b for all (a,b)∈E(D). For α∈⟨D⟩, let ℓ(D,α) be the minimal length of a word in E(D) expressing α. It is well known that the semigroup Singn of all transformations of rank at most n−1 is generated by its idempotents of rank n−1. When D=Kn is the complete undirected graph, Howie and Iwahori, independently, obtained a formula to calculate ℓ(Kn,α), for any α∈⟨Kn⟩=Singn; however, no analogous non-trivial results are known when D≠Kn. In this paper, we characterise all simple digraphs D such that either ℓ(D,α) is equal to Howie–Iwahori’s formula for all α∈⟨D⟩, or ℓ(D,α)=n−fix(α) for all α∈⟨D⟩, or ℓ(D,α)=n−rk(α) for all α∈⟨D⟩. We also obtain bounds for ℓ(D,α) when D is an acyclic digraph or a strong tournament (the latter case corresponds to a smallest generating set of idempotents of rank n−1 of Singn). We finish the paper with a list of conjectures and open problems.
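For very small n, both ⟨D⟩ and ℓ(D, α) can be computed exhaustively. A sketch (hypothetical helper names; transformations are tuples on 0-indexed points, composed left to right to match the right action in the abstract) that builds ⟨D⟩ by breadth-first search over products of the arc idempotents:

```python
from collections import deque

def arc_idempotent(n, a, b):
    # the rank n-1 idempotent a -> b: sends a to b, fixes every other point
    return tuple(b if x == a else x for x in range(n))

def compose(f, g):
    # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

def word_lengths(n, edges):
    """BFS over products of arc idempotents of D = (range(n), edges);
    returns a dict mapping each element of <D> to l(D, alpha)."""
    gens = [arc_idempotent(n, a, b) for (a, b) in edges]
    dist = {g: 1 for g in gens}     # each generator is a word of length 1
    queue = deque(gens)
    while queue:
        f = queue.popleft()
        for g in gens:
            h = compose(f, g)
            if h not in dist:       # first time seen = minimal word length
                dist[h] = dist[f] + 1
                queue.append(h)
    return dist

# D = K3 (both directions of every edge): <K3> = Sing_3,
# the 3**3 - 3! = 21 non-invertible transformations of 3 points
K3 = [(a, b) for a in range(3) for b in range(3) if a != b]
lengths = word_lengths(3, K3)
```

Because BFS discovers each element at its minimal distance from the generators, `dist[h]` is exactly ℓ(D, h); swapping in a different edge list explores the non-complete digraphs the paper is about.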
The second and third authors were supported by the EPSRC grant EP/K033956/1.
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/9277
2017-02-01T00:00:00Z
Cameron, P. J.; Castillo-Ramirez, A.; Gadouleau, M.; Mitchell, J. D.

Modeling the aggregated exposure and responses of bowhead whales Balaena mysticetus to multiple sources of anthropogenic underwater sound
http://hdl.handle.net/10023/9259
Potential responses of marine mammals to anthropogenic underwater sound are usually assessed by researchers and regulators on the basis of exposure to a single, relatively loud sound source. However, marine mammals typically receive sounds from multiple, dynamic sources. We developed a method to aggregate modeled sounds from multiple sources and estimate the sound levels received by individuals. To illustrate the method, we modeled the sound fields of 9 sources associated with oil development and estimated the sound received over 47 d by a population of 10 000 simulated bowhead whales Balaena mysticetus on their annual migration through the Alaskan Beaufort Sea. Empirical data were sufficient to parameterize simulations of the distribution of individual whales over time and their range of movement patterns. We ran 2 simulations to estimate the sound exposure history and distances traveled by bowhead whales: one in which they could change their movement paths (avert) in response to set levels of sound and one in which they could not avert. When animals could not avert, about 2% of the simulated population was exposed to root mean square (rms) sound pressure levels (SPL) ≥ 180 dB re 1 μPa, a level that regulators in the U.S. often associate with injury. When animals could avert from sound levels that regulators often associate with behavioral disturbance (rms SPL > 160 dB re 1 μPa), <1% of the simulated population was exposed to levels associated with injury. Nevertheless, many simulated bowhead whales received sound levels considerably above ambient throughout their migration. Our method enables estimates of the aggregated level of sound to which populations are exposed over extensive areas and time periods.
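Received levels from simultaneous sources combine on an intensity scale, not by adding decibels. A minimal sketch of that standard summation (our own illustration; the paper's propagation modelling is far more detailed) is:

```python
import math

def aggregate_spl(levels_db):
    """Combine simultaneous received levels (dB) by summing intensities:
    L_total = 10 * log10(sum_i 10^(L_i / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 160 dB sources yield about 163 dB (a 3 dB increase), not 320 dB,
# and a much quieter source barely raises the aggregate level.
```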
This work was supported in part by a contract between BP Exploration (Alaska) Inc. and the University of California, Santa Barbara (E.F.), and by the North Slope Borough.
Mon, 02 May 2016 00:00:00 GMT
http://hdl.handle.net/10023/9259
2016-05-02T00:00:00Z
Ellison, William T.; Racca, Roberto; Clark, Christopher W.; Streever, Bill; Frankel, Adam S.; Fleishman, Erica; Angliss, Robyn; Berger, Joel; Ketten, Darlene; Guerra, Melania; Leu, Matthias; McKenna, Megan; Sformo, Todd; Southall, Brandon; Suydam, Robert; Thomas, Len

PReMiuM : an R package for profile regression mixture models using Dirichlet processes
http://hdl.handle.net/10023/8931
PReMiuM is a recently developed R package for Bayesian clustering using a Dirichlet process mixture model. This model is an alternative to regression models, non-parametrically linking a response vector to covariate data through cluster membership (Molitor, Papathomas, Jerrett, and Richardson 2010). The package allows binary, categorical, count and continuous response, as well as continuous and discrete covariates. Additionally, predictions may be made for the response, and missing values for the covariates are handled. Several samplers and label switching moves are implemented along with diagnostic tools to assess convergence. A number of R functions for post-processing of the output are also provided. In addition to fitting mixtures, it may also be of interest to determine which covariates actively drive the mixture components. This is implemented in the package as variable selection.
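PReMiuM itself is an R package, so purely as a language-neutral illustration of the Dirichlet-process clustering idea behind it (not the package's actual sampler), here is a minimal Chinese restaurant process in Python: each item joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to the concentration parameter alpha.

```python
import random

def crp(n_items, alpha, seed=0):
    """Sample a partition of n_items from the Chinese restaurant process."""
    rng = random.Random(seed)
    assignments = []           # cluster label of each item
    sizes = []                 # current cluster sizes
    for i in range(n_items):
        # Total weight is i (existing items) + alpha (chance of a new cluster).
        weights = sizes + [alpha]
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(sizes):
            sizes.append(1)    # open a new cluster
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments
```

Larger alpha favours more, smaller clusters; the number of clusters is not fixed in advance, which is the property the profile regression model exploits.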
Fri, 20 Mar 2015 00:00:00 GMT
http://hdl.handle.net/10023/8931
2015-03-20T00:00:00Z
Liverani, Silvia; Hastie, David; Azizi, Lamiae; Papathomas, Michail; Richardson, Sylvia

Avoidance of wind farms by harbour seals is limited to pile driving activities
http://hdl.handle.net/10023/8856
1. As part of global efforts to reduce dependence on carbon-based energy sources there has been a rapid increase in the installation of renewable energy devices. The installation and operation of these devices can result in conflicts with wildlife. In the marine environment, mammals may avoid wind farms that are under construction or operating. Such avoidance may lead to more time spent travelling or displacement from key habitats. A paucity of data on at-sea movements of marine mammals around wind farms limits our understanding of the nature of their potential impacts. 2. Here, we present the results of a telemetry study on harbour seals Phoca vitulina in The Wash, south-east England, an area where wind farms are being constructed using impact pile driving. We investigated whether seals avoid wind farms during operation, construction in its entirety, or during piling activity. The study was carried out using historical telemetry data collected prior to any wind farm development and telemetry data collected in 2012 during the construction of one wind farm and the operation of another. 3. Within an operational wind farm, there was a close-to-significant increase in seal usage compared to prior to wind farm development. However, the wind farm was at the edge of a large area of increased usage, so the presence of the wind farm was unlikely to be the cause. 4. There was no significant displacement during construction as a whole. However, during piling, seal usage (abundance) was significantly reduced up to 25 km from the piling activity; within 25 km of the centre of the wind farm, there was a 19 to 83% (95% confidence intervals) decrease in usage compared to during breaks in piling, equating to a mean estimated displacement of 440 individuals. This amounts to significant displacement starting from predicted received levels of between 166 and 178 dB re 1 μPa(p·p). 
Displacement was limited to piling activity; within 2 h of cessation of pile driving, seals were distributed as per the non-piling scenario. 5. Synthesis and applications. Our spatial and temporal quantification of avoidance of wind farms by harbour seals is critical to reduce uncertainty and increase robustness in environmental impact assessments of future developments. Specifically, the results will allow policymakers to produce industry guidance on the likelihood of displacement of seals in response to pile driving; the relationship between sound levels and avoidance rates; and the duration of any avoidance, thus allowing far more accurate environmental assessments to be carried out during the consenting process. Further, our results can be used to inform mitigation strategies in terms of both the sound levels likely to cause displacement and what temporal patterns of piling would minimize the magnitude of the energetic impacts of displacement.
DJFR, GH, VMJ and BM were funded by the UK Department of Energy and Climate Change (DECC) as part of their Offshore Energy Strategic Environmental Assessment programme. DT and GH were also funded by NERC/Defra EBAO NE/J004243/1. ELJ was funded under Scottish Government grant MMSS001/01. This work was also supported by National Capability funding from the Natural Environment Research Council to SMRU (grant no. SMRU1001). Tags and their deployment in the Thames in 2006 and The Wash were funded by DECC. Tags and their deployment in the Thames in 2012 were commissioned by Zoological Society London, with funding from BBC Wildlife Fund and Sita Trust.
Thu, 01 Dec 2016 00:00:00 GMT
http://hdl.handle.net/10023/8856
2016-12-01T00:00:00Z
Russell, Deborah J. F.; Hastie, Gordon D.; Thompson, David; Janik, Vincent M.; Hammond, Philip S.; Scott-Hayward, Lindesay A. S.; Matthiopoulos, Jason; Jones, Esther L.; McConnell, Bernie J.

An efficient acoustic density estimation method with human detectors applied to gibbons in Cambodia
http://hdl.handle.net/10023/8842
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. 
We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
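SECR estimation hinges on a detection function relating detection probability to the distance between an acoustic source and a detector. The half-normal form below is the conventional default choice (our assumption for illustration; the abstract does not fix the form, and the parameter values are invented):

```python
import math

def halfnormal_detect(d, g0, sigma):
    """Half-normal detection function: g(d) = g0 * exp(-d^2 / (2 * sigma^2)),
    where g0 is detectability at distance zero and sigma sets the spatial scale."""
    return g0 * math.exp(-d * d / (2.0 * sigma * sigma))

# Illustrative values only: detection probability for a call 500 m from a
# listening post, assuming g0 = 0.9 and sigma = 400 m.
p = halfnormal_detect(500.0, 0.9, 400.0)
```

In the multi-occasion likelihood described above, an availability parameter multiplies this detection probability, and the paper shows the two are not confounded.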
D. Kidney was supported by an Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Grant studentship (EPSRC grant EP/P505097/1). B. Stevenson was supported by a studentship jointly funded by the University of St Andrews and EPSRC, through the National Centre for Statistical Ecology (EPSRC grant EP/I000917/1).
Thu, 19 May 2016 00:00:00 GMT
http://hdl.handle.net/10023/8842
2016-05-19T00:00:00Z
Kidney, Darren; Rawson, Benjamin M.; Borchers, David Louis; Stevenson, Ben; Marques, Tiago A.; Thomas, Len

Gauging allowable harm limits to cumulative, sub-lethal effects of human activities on wildlife : a case-study approach using two whale populations
http://hdl.handle.net/10023/8716
As sublethal human pressures on marine wildlife and their habitats increase and interact in complex ways, there is a pressing need for methods to quantify cumulative impacts of these stressors on populations, and policy decisions about allowable harm limits. Few studies quantify population consequences of individual stressors, and fewer quantify synergistic effects. Incorporating all sources of uncertainty can cause predictions to span the range from negligible to catastrophic. Two places were identified to bound this problem through energetic mechanisms that reduce prey available to individuals. First, the US Marine Mammal Protection Act's Potential Biological Removal (PBR) equation was used as a placeholder allowable harm limit to represent the number of animals that can be removed annually without depleting a population below agreed-upon management targets. That rephrased the research question from, “How big could cumulative impacts be?” to “How big would cumulative impacts have to be to exceed an agreed-upon threshold?” Secondly, two data-rich case studies, namely Gulf of Maine humpback and northeast Pacific resident killer whales, were used as examples to parameterize the weakest link, namely between prey availability and demography. Given no additional information, the model predicted that human activities need only reduce prey available to the killer whale population by ~10% to cause a population-level take, through reduced fecundity and/or survival, equivalent to PBR. By contrast, in the humpback population, reduction in prey availability of ~50% was needed to cause a similar, PBR-sized effect. The paper describes an approach – results are merely illustrative. The two case studies differ in prey specialization, life history, and, no doubt, proximity to carrying capacity. 
This method of inverting the problem refocuses discussions around what the level of prey depletion – via competition with commercial fisheries, displacement from feeding areas through noise-generating activities, or acoustic masking of signals used to detect prey – would have to occur to exceed allowable harm limits set for lethal takes in fisheries or other, more easily quantifiable, human activities.
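For context, the PBR formula used as the placeholder harm limit above is, in its standard MMPA form, PBR = N_min × ½R_max × F_R, where N_min is a conservative minimum abundance estimate, R_max the maximum net productivity rate, and F_R a recovery factor between 0.1 and 1. The sketch below uses conventional default values for illustration, not values from the paper:

```python
def pbr(n_min, r_max, recovery_factor):
    """Potential Biological Removal: PBR = N_min * (R_max / 2) * F_R."""
    if not 0.1 <= recovery_factor <= 1.0:
        raise ValueError("recovery factor F_R is conventionally in [0.1, 1.0]")
    return n_min * (r_max / 2.0) * recovery_factor

# With the conventional cetacean default R_max = 0.04 and F_R = 0.5,
# a minimum population estimate of 10 000 animals gives:
limit = pbr(10_000, 0.04, 0.5)   # 100.0 animals per year
```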
Rob Williams was supported by a Marie Curie International Incoming Fellowship within the 7th European Community Framework Programme (Project CONCEAL, FP7, PIIF-GA-2009-253407).
Mon, 01 Aug 2016 00:00:00 GMT
http://hdl.handle.net/10023/8716
2016-08-01T00:00:00Z
Williams, Rob; Thomas, Len; Ashe, Erin; Clark, Christopher W.; Hammond, Philip S.

Randomization-based models for multitiered experiments: I. A chain of randomizations
http://hdl.handle.net/10023/8636
We derive randomization-based models for experiments with a chain of randomizations. Estimation theory for these models leads to formulae for the estimators of treatment effects, their standard errors, and expected mean squares in the analysis of variance. We discuss the practicalities in fitting these models and outline the difficulties that can occur, many of which do not arise in two-tiered experiments.
Wed, 01 Jun 2016 00:00:00 GMT
http://hdl.handle.net/10023/8636
2016-06-01T00:00:00Z
Bailey, Rosemary Anne; Brien, C. J.

Constructing flag-transitive, point-imprimitive designs
http://hdl.handle.net/10023/8546
We give a construction of a family of designs with a specified point-partition and determine the subgroup of automorphisms leaving invariant the point-partition. We give necessary and sufficient conditions for a design in the family to possess a flag-transitive group of automorphisms preserving the specified point-partition. We give examples of flag-transitive designs in the family, including a new symmetric 2-(1408,336,80) design with automorphism group 2^12:((3⋅M22):2) and a construction of one of the families of the symplectic designs (the designs S^−(n) ) exhibiting a flag-transitive, point-imprimitive automorphism group.
Wed, 04 May 2016 00:00:00 GMT
http://hdl.handle.net/10023/8546
2016-05-04T00:00:00Z
Cameron, Peter Jephson; Praeger, Cheryl E.

Permutation groups and transformation semigroups : results and problems
http://hdl.handle.net/10023/8532
J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group G and a single non-permutation a. Our particular concern is the influence that properties of G (related to homogeneity, transitivity and primitivity) have on the structure of the semigroup. In the first part of the paper, we consider properties of S=<G,a> such as regularity and generation. The second is a brief report on the synchronization project, which aims to decide in what circumstances S contains an element of rank 1. The paper closes with a list of open problems on permutation groups and linear groups, and some comments about the impact on semigroups are provided. These two research directions outlined above lead to very interesting and challenging problems on primitive permutation groups whose solutions require combining results from several different areas of mathematics, certainly fulfilling both of Howie's elegance and value tests in a new and fascinating way.
Thu, 01 Oct 2015 00:00:00 GMT
http://hdl.handle.net/10023/8532
2015-10-01T00:00:00Z
Araujo, Joao; Cameron, Peter Jephson

Guessing games on triangle-free graphs
http://hdl.handle.net/10023/8518
The guessing game introduced by Riis is a variant of the "guessing your own hats" game and can be played on any simple directed graph G on n vertices. For each digraph G, it is proved that there exists a unique guessing number gn(G) associated to the guessing game played on G. When we consider the directed edge to be bidirected, in other words, the graph G is undirected, Christofides and Markström introduced a method to bound the value of the guessing number from below using the fractional clique cover number κ_f(G). In particular they showed gn(G) ≥ |V(G)| − κ_f(G). Moreover, it is pointed out that equality holds in this bound if the underlying undirected graph G falls into one of the following categories: perfect graphs, cycle graphs or their complement. In this paper, we show that there are triangle-free graphs that have guessing numbers which do not meet the fractional clique cover bound. In particular, the famous triangle-free Higman-Sims graph has guessing number at least 77 and at most 78, while the bound given by fractional clique cover is 50.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/8518
2016-01-01T00:00:00Z
Cameron, Peter Jephson; Dang, Anh; Riis, Soren

Exploring dependence between categorical variables : benefits and limitations of using variable selection within Bayesian clustering in relation to log-linear modelling with interaction terms
http://hdl.handle.net/10023/8356
This manuscript is concerned with relating two approaches that can be used to explore complex dependence structures between categorical variables, namely Bayesian partitioning of the covariate space incorporating a variable selection procedure that highlights the covariates that drive the clustering, and log-linear modelling with interaction terms. We derive theoretical results on this relation and discuss if they can be employed to assist log-linear model determination, demonstrating advantages and limitations with simulated and real data sets. The main advantage concerns sparse contingency tables. Inferences from clustering can potentially reduce the number of covariates considered and, subsequently, the number of competing log-linear models, making the exploration of the model space feasible. Variable selection within clustering can inform on marginal independence in general, thus allowing for a more efficient exploration of the log-linear model space. However, we show that the clustering structure is not informative on the existence of interactions in a consistent manner. This work is of interest to those who utilize log-linear models, as well as practitioners such as epidemiologists that use clustering models to reduce the dimensionality in the data and to reveal interesting patterns on how covariates combine.
This work was supported by MRC grant G1002319.
Wed, 01 Jun 2016 00:00:00 GMT
Papathomas, Michail; Richardson, Sylvia

Bayesian sequential tests of the initial size of a linear pure death process
http://hdl.handle.net/10023/8286
We provide a recursive algorithm for determining the sampling plans of invariant Bayesian sequential tests of the initial size of a linear pure death process of unknown rate. These tests compare favourably with the corresponding truncated sequential probability ratio tests.
Fri, 01 May 2015 00:00:00 GMT
Goudie, I.B.J.

Using species proportions to quantify turnover in biodiversity
http://hdl.handle.net/10023/8033
Quantifying species turnover is an important aspect of biodiversity monitoring. Turnover measures are usually based on species presence/absence data, reflecting the rate at which species are replaced. However, measures that reflect the rate at which individuals of a species are replaced by individuals of another species are far more sensitive to change. In this paper, we propose families of turnover measures that reflect changes in species proportions. We study the properties of our measures, and use simulation to assess their success in detecting turnover. Using data on the British farmland bird community from the breeding bird survey, we evaluate our measures to quantify temporal turnover and how it varies across the British mainland.
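The specific families of measures proposed in the paper are not reproduced here, but the idea of turnover based on species proportions can be illustrated with a simple, hypothetical index: half the summed absolute change in proportions between two surveys (the total variation distance). It is 0 when proportions are unchanged and 1 when the community is completely replaced, and unlike a presence/absence measure it responds to individuals of one species being replaced by individuals of another.

```python
def proportion_turnover(p_before, p_after):
    """Illustrative proportion-based turnover index (not the paper's
    exact measures): half the summed absolute change in species
    proportions. 0 = no change, 1 = complete replacement."""
    if abs(sum(p_before) - 1) > 1e-9 or abs(sum(p_after) - 1) > 1e-9:
        raise ValueError("inputs must be proportions summing to 1")
    return 0.5 * sum(abs(a - b) for a, b in zip(p_before, p_after))
```

For example, a shift from proportions (0.6, 0.4) to (0.4, 0.6) gives a turnover of 0.2 even though the set of species present is unchanged, which is exactly the kind of change a presence/absence measure cannot detect.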
We are very grateful to all the volunteers who have contributed to the BBS. Yuan was funded by EPSRC/NERC grant EP/1000917/1. Harrison was funded by the Scottish Government’s Centre of Expertise ClimateXChange (www.climatexchange.org.uk).
Wed, 01 Jun 2016 00:00:00 GMT
Yuan, Yuan; Buckland, Stephen Terrence; Harrison, Phil; Foss, Sergey; Johnston, Alison

Efficient abstracting of dive profiles using a broken-stick model
http://hdl.handle.net/10023/7972
For diving animals, animal-borne sensors are used to collect time-depth information for studying behaviour, ranging patterns and foraging ecology. Often, this information needs to be compressed for storage or transmission. Widely used devices called conductivity-temperature-depth satellite relay data loggers (CTD-SRDLs) sample time and depth at high resolution during a dive and then abstract the time-depth trajectory using a broken-stick model (BSM). This approximation method can efficiently summarize the curvilinear shape of a dive using a piecewise linear shape with a small, fixed number of vertices, or break points. We present the process of abstracting dives using the BSM and quantify its performance by measuring the uncertainty associated with the profiles it produces. We develop a method for obtaining a confidence zone and an index for the goodness-of-fit (dive zone index, DZI) for abstracted dive profiles. We validate our results with a case study using dives from elephant seals (Mirounga spp.). We use generalized additive models (GAMs) to determine whether the DZI can be used as a proxy for an absolute measure of fit and investigate the relationship between the DZI and the dive shape. We found a strong correlation between the residual sum of squares (RSS) for the difference between the detailed and abstracted profiles, and the DZI and maximum residual (R4), for dives resulting from CTD-SRDLs (69% deviance explained). On its own, the DZI explained a lower percentage of deviance which was variable for abstracted dives with different numbers of break points. We also found evidence for systematic differences in the DZI for different dive shapes (65% deviance explained). Although the proportional loss of information in the abstraction of time-depth dive profiles by BSM is high, what remains is sufficient to infer goodness-of-fit of the abstracted profile by reversing the abstraction process.
Our results suggest that together the DZI and R4 can be used as a proxy for the RSS, and we present the method for obtaining these metrics for BSM-abstracted profiles.
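The broken-stick abstraction can be sketched as a greedy procedure: start from the dive's endpoints and repeatedly promote the sampled point lying furthest (in depth) from the current piecewise-linear profile to a break point, until a fixed budget of break points is used. The function below is a minimal illustration of that idea, not the CTD-SRDL firmware; names and the break-point budget are illustrative.

```python
from bisect import bisect_right

def broken_stick(times, depths, max_breaks=4):
    """Greedy broken-stick abstraction of a time-depth dive profile.

    Starts from the first and last samples and repeatedly adds the
    interior sample with the largest absolute depth residual from the
    current piecewise-linear profile, until max_breaks break points
    have been added or the profile reproduces the data exactly."""
    kept = [0, len(times) - 1]          # indices of retained vertices
    for _ in range(max_breaks):
        worst_i, worst_r = None, 0.0
        for i in range(1, len(times) - 1):
            if i in kept:
                continue
            # locate the retained segment [kept[j], kept[j+1]] containing i
            j = bisect_right(kept, i) - 1
            i0, i1 = kept[j], kept[j + 1]
            frac = (times[i] - times[i0]) / (times[i1] - times[i0])
            interp = depths[i0] + frac * (depths[i1] - depths[i0])
            r = abs(depths[i] - interp)
            if r > worst_r:
                worst_i, worst_r = i, r
        if worst_i is None:             # no residual left: profile is exact
            break
        kept.append(worst_i)
        kept.sort()
    return [(times[i], depths[i]) for i in kept]
```

On a simple V-shaped dive a single break point at the maximum depth already reproduces the profile exactly, which is why curvilinear dives (where residual error persists) are the interesting case for the DZI.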
This work was supported by SMRU Ltd (now SMRU Marine) in the form of a PhD fellowship (T.P.). Completion of the manuscript was supported by a National Research Foundation Scarce Skills Postdoctoral Fellowship at the University of Cape Town, South Africa (T.P.). The CTD-SRDL data presented in this manuscript were collected as part of a project funded by the Natural Environment Research Council (NERC) grants NE/E018289/1 and NER/D/S/2002/00426.
Sun, 01 Mar 2015 00:00:00 GMT
Photopoulou, T.; Lovell, Philip; Fedak, M.A.; Thomas, L.; Matthiopoulos, J.

Dose response severity functions for acoustic disturbance in cetaceans using recurrent event survival analysis
http://hdl.handle.net/10023/7845
Behavioral response studies (BRSs) aim to enhance our understanding of the behavior changes made by animals in response to specific exposure levels of different stimuli, often presented in an increasing dosage. Here, we focus on BRSs that aim to understand behavioral responses of free-ranging whales and dolphins to manmade acoustic signals (although the methods are applicable more generally). One desired outcome of these studies is dose-response functions relevant to different species, signals and contexts. We adapted and applied recurrent event survival analysis (Cox proportional hazard models) to data from the 3S BRS project, where multiple behavioral responses of different severities had been observed per experimental exposure and per individual based upon expert scoring. We included species, signal type, exposure number and behavioral state prior to exposure as potential covariates. The best model included all main effect terms, with the exception of exposure number, as well as two interaction terms. The interactions between signal and behavioral state, and between species and behavioral state highlighted that the sensitivity of animals to different signal types (a 6–7 kHz upsweep sonar signal [MFAS] or a 1–2 kHz upsweep sonar signal [LFAS]) depended on their behavioral state (feeding or nonfeeding), and this differed across species. Of the three species included in this analysis (sperm whale [Physeter macrocephalus], killer whale [Orcinus orca] and long-finned pilot whale [Globicephala melas]), killer whales were consistently the most likely to exhibit behavioral responses to naval sonar exposure. We conclude that recurrent event survival analysis provides an effective framework for fitting dose-response severity functions to data from behavioral response studies. It can provide outputs that can help government and industry to evaluate the potential impacts of anthropogenic sound production in the ocean.
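Recurrent event survival models of this kind are typically fitted to data in counting-process ("start-stop") format, with one row per at-risk interval and per individual. As a rough, hypothetical illustration (the field names and values are invented, not the 3S data), one experimental exposure with several expert-scored responses can be expanded like this:

```python
def to_counting_process(exposure_end, response_times, covariates):
    """Expand one experimental exposure into counting-process rows
    (start, stop, event, covariates). Each scored behavioral response
    at time t closes an interval with event=1; the remainder of the
    exposure forms a final censored interval (event=0)."""
    rows, start = [], 0.0
    for t in sorted(response_times):
        rows.append({"start": start, "stop": t, "event": 1, **covariates})
        start = t
    if start < exposure_end:
        rows.append({"start": start, "stop": exposure_end, "event": 0, **covariates})
    return rows

# Hypothetical exposure: two scored responses during a 30-minute session.
rows = to_counting_process(
    exposure_end=30.0,
    response_times=[8.0, 19.5],
    covariates={"species": "killer_whale", "signal": "LFAS", "state": "feeding"},
)
```

Rows in this layout can then be passed to any Cox proportional hazards fitter that supports start-stop data, with the covariate columns entering as main effects and interactions as in the model described above.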
Fri, 20 Nov 2015 00:00:00 GMT
Harris, Catriona M; Sadykova, Dinara; De Ruiter, Stacy Lynn; Tyack, Peter Lloyd; Miller, Patrick; Kvadsheim, Petter; Lam, Frans-Peter; Thomas, Len

Passive acoustic monitoring of beaked whale densities in the Gulf of Mexico
http://hdl.handle.net/10023/7779
Beaked whales are deep diving elusive animals, difficult to census with conventional visual surveys. Methods are presented for the density estimation of beaked whales, using passive acoustic monitoring data collected at sites in the Gulf of Mexico (GOM) from the period during and following the Deepwater Horizon oil spill (2010–2013). Beaked whale species detected include: Gervais’ (Mesoplodon europaeus), Cuvier’s (Ziphius cavirostris), Blainville’s (Mesoplodon densirostris) and an unknown species of Mesoplodon sp. (designated as Beaked Whale Gulf — BWG). For Gervais’ and Cuvier’s beaked whales, we estimated weekly animal density using two methods, one based on the number of echolocation clicks, and another based on the detection of animal groups during 5 min time-bins. Density estimates derived from these two methods were in good general agreement. At two sites in the western GOM, Gervais’ beaked whales were present throughout the monitoring period, but Cuvier’s beaked whales were present only seasonally, with periods of low density during the summer and higher density in the winter. At an eastern GOM site, both Gervais’ and Cuvier’s beaked whales had a high density throughout the monitoring period.
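Density estimation from click counts in this literature generally follows a cue-counting estimator: false-positive-corrected click counts are scaled by the monitored area, the monitoring time, and the per-animal click production rate. A hedged sketch of that estimator follows; the function and parameter names are illustrative, and the values in the test are not the Gulf of Mexico estimates.

```python
import math

def cue_density(n_clicks, false_pos_prop, n_sensors, max_radius_km,
                det_prob, hours, clicks_per_animal_hour):
    """Cue-counting point estimate of animal density (animals / km^2):

        D = n * (1 - c) / (K * pi * w^2 * P * T * r)

    n = detected clicks, c = false-positive proportion, K = sensors,
    w = truncation radius (km), P = mean detection probability within w,
    T = monitoring time (h), r = click rate per animal per hour."""
    effective_area = n_sensors * math.pi * max_radius_km ** 2 * det_prob
    return n_clicks * (1 - false_pos_prop) / (effective_area * hours
                                              * clicks_per_animal_hour)
```

In practice each input carries its own estimation uncertainty (detection probability from trials on tagged animals, click rate from tag data), and these are propagated into the variance of the density estimate.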
Funding to support the tag data was also received from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland) funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions.
Thu, 12 Nov 2015 00:00:00 GMT
Hildebrand, John; Baumann-Pickering, Simone; Frasier, Kaitlin; Trickey, Jennifer; Merkens, Karlina; Wiggins, Sean; McDonald, Mark; Garrison, Lance; Harris, Danielle; Marques, Tiago A.; Thomas, Len

Status and future of research on the behavioural responses of marine mammals to U.S. Navy sonar
http://hdl.handle.net/10023/7741
A review of the status and future of research into behavioral responses of marine mammals to naval sonar exposure was undertaken to evaluate the return on investment of current US Navy funded programs, identify the data needs and the contributions of current research programs to meeting data needs, and determine the ability to meet outstanding data needs given the current state of technology. As part of this review, a workshop was held from 21-22 April 2015 in Monterey, California. Workshop attendees were key representatives of Navy-funded behavioral response studies, as well as three external reviewers who were selected because of their expertise in animal behavior and behavioral responses to anthropogenic stimuli in the aquatic and terrestrial environments. Prior to the workshop, a questionnaire was circulated to canvass the opinions of members of the scientific community (primarily workshop attendees exclusive of external reviewers) on each of the research approaches taken to address this topic. The workshop was then structured around the questionnaire and responses received, via a series of discussion sessions. Afterwards, each research approach was evaluated independently by the external reviewers. This report presents a synthesis of the evaluations and recommendations of the external reviewers on current and future behavioral response research relevant to naval sonar. All reviewers agreed that excellent progress has been made on this topic and that each of the research approaches has contributed to our understanding of cetacean responses to naval sonar. The report includes specific comments and recommendations of the reviewers relevant to each approach, but also includes suggestions for priority species and a comprehensive list of recommendations for the future of BRS research in general (Tables 1 and 2). 
In summary, it was recommended that BRS research be continued and extended to increase sample sizes, experimental replication, temporal duration, and spatial scale, including more research in areas where the animals are presumably more naïve than on the naval ranges. It was noted that future investigations would benefit from combining experimentation and observation, to enable linking short-term behavioral responses to the long-term fitness consequences of repeated exposure. Beaked whales were the species group ranked highest in terms of research priority. The importance of baseline studies and longer-term monitoring of animals before and after exposure is emphasized throughout.
Thu, 01 Jan 2015 00:00:00 GMT
Harris, Catriona M; Thomas, Len

A description of LATTE outputs
http://hdl.handle.net/10023/7720
Fri, 30 Oct 2015 00:00:00 GMT
Marques, Tiago A.; Thomas, Len

Impacts of anthropogenic noise on marine life : publication patterns, new discoveries, and future directions in research and management
http://hdl.handle.net/10023/7640
Anthropogenic underwater noise is now recognized as a world-wide problem, and recent studies have shown a broad range of negative effects in a variety of taxa. Underwater noise from shipping is increasingly recognized as a significant and pervasive pollutant with the potential to impact marine ecosystems on a global scale. We reviewed six regional case studies as examples of recent research and management activities relating to ocean noise in a variety of taxonomic groups, locations, and approaches. However, as no six projects could ever cover all taxa, sites and noise sources, a brief bibliometric analysis places these case studies into the broader historical and topical context of the peer-reviewed ocean noise literature as a whole. The case studies highlighted emerging knowledge of impacts, including the ways that non-injurious effects can still accumulate at the population level, and detailed approaches to guide ocean noise management. They build a compelling case that a number of anthropogenic noise types can affect a variety of marine taxa. Meanwhile, the bibliometric analyses revealed an increasing diversity of ocean noise topics covered and journal outlets since the 1940s. This could be seen in terms of both the expansion of the literature from more physical interests to ecological impacts of noise, management and policy, and consideration of a widening range of taxa. However, if our scientific knowledge base is ever to get ahead of the curve of rapid industrialization of the ocean, we are going to have to identify naïve populations and relatively pristine seas, and construct mechanistic models, so that we can predict impacts before they occur, and guide effective mitigation for the most vulnerable populations.
Funding for R. Bruintjes, J. Purser, A. N. Radford, S. D, Simpson and M. A. Wale was provided by the UK Department for Environment Food and Rural Affairs (Defra). N.D. Merchant received travel funding from Ocean Networks Canada. RW was supported by a Marie Curie International Incoming Fellowship within the 7th European Community Framework Programme (Project CONCEAL, FP7, PIIF-GA-2009-253407), and received travel funding to attend IMCC3 from the Society for Conservation Biology (SCB) Marine Section and the International Whaling Commission’s Climate Change steering group (with thanks to Mark Simmonds). A.J. Wright also received travel funding to attend IMCC3 from the SCB Marine Section. Date of Acceptance: 28/05/2015
Thu, 01 Oct 2015 00:00:00 GMT
Williams, Robert; Wright, Andrew J; Ashe, Erin; Blight, LK; Bruintjes, R; Canessa, R; Clark, CW; Cullis-Suzuki, S; Dakin, DT; Erbe, C; Hammond, Philip Steven; Merchant, MD; O'Hara, PD; Purser, J; Radford, AN; Simpson, SD; Thomas, Len; Wale, MA

Procedure description : using AUTEC’s hydrophones surrounding a DTAGed whale to obtain localizations
http://hdl.handle.net/10023/7523
Thu, 01 Jan 2015 00:00:00 GMT
Marques, Tiago A.; Shaeffer, Jessica; Moretti, David; Thomas, Len

Circular designs balanced for neighbours at distances one and two
http://hdl.handle.net/10023/7454
We define three types of neighbour-balanced designs for experiments where the units are arranged in a circle or single line in space or time. The designs are balanced with respect to neighbours at distance one and at distance two. The variants come from allowing or forbidding self-neighbours, and from considering neighbours to be directed or undirected. For two of the variants, we give a method of constructing a design for all values of the number of treatments, except for some small values where it is impossible. In the third case, we give a partial solution that covers all sizes likely to be used in practice.
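One common formalization of directed neighbour balance at distance d in a circular design is that every ordered pair of distinct treatments must occur equally often among the pairs (a[i], a[i+d]) taken around the circle (with self-pairs either forbidden or also required to be balanced). A small checker for that property, written as an illustration rather than a reproduction of the paper's definitions:

```python
from collections import Counter
from itertools import permutations

def is_neighbour_balanced(seq, distance, allow_self=False):
    """Check directed neighbour balance of a circular treatment sequence:
    every ordered pair of distinct treatments (plus self-pairs, when
    allow_self is True) must occur equally often at the given distance."""
    n = len(seq)
    pairs = Counter((seq[i], seq[(i + distance) % n]) for i in range(n))
    treatments = sorted(set(seq))
    wanted = list(permutations(treatments, 2))
    if allow_self:
        wanted += [(a, a) for a in treatments]
    if not allow_self and any(a == b for (a, b) in pairs):
        return False                    # forbidden self-neighbour present
    counts = {pairs.get(p, 0) for p in wanted}
    return len(counts) == 1 and 0 not in counts

# An Eulerian circuit of the complete directed graph on 3 treatments:
example = [0, 1, 0, 2, 1, 2]
```

The example sequence is balanced at distance 1 (every ordered pair of distinct treatments occurs exactly once) but not at distance 2, where self-neighbours such as (0, 0) appear; this is precisely the kind of failure the designs in the paper are constructed to avoid.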
Mon, 01 Dec 2014 00:00:00 GMT
Aldred, R. E. L.; Bailey, R. A.; Mckay, Brendan D.; Wanless, Ian M.

Evaluating the utility of B/Ca ratios in planktic foraminifera as a proxy for the carbonate system : a case study of Globigerinoides ruber
http://hdl.handle.net/10023/7133
B/Ca ratios in foraminifera have attracted considerable scientific attention as a proxy for the past ocean carbonate system. However, the carbonate system controls on B/Ca ratios are not straightforward, with Δ[CO₃²⁻] ([CO₃²⁻]in situ − [CO₃²⁻]at saturation) correlating best with B/Ca ratios in benthic foraminifera, rather than pH, B(OH)₄⁻/HCO₃⁻, or B(OH)₄⁻/DIC (as a simple model of boron speciation in seawater and incorporation into CaCO₃ would predict). Furthermore, culture experiments have shown that in planktic foraminifera properties such as salinity and [B]SW can have profound effects on B/Ca ratios beyond those predicted by simple partition coefficients. Here, we investigate the controls on B/Ca ratios in G. ruber via a combination of culture experiments and core-top measurements, and add to a growing body of evidence that suggests B/Ca ratios in symbiont-bearing foraminiferal carbonate are not a straightforward proxy for past seawater carbonate system conditions. We find that while B/Ca ratios in culture experiments covary with pH, in open ocean sediments this relationship is not seen. In fact, our B/Ca data correlate best with [PO₄³⁻] (a previously undocumented association) and, in most regions, salinity. These findings might suggest a precipitation rate or crystallographic control on boron incorporation into foraminiferal calcite. Regardless, our results underscore the need for caution when attempting to interpret B/Ca records in terms of the ocean carbonate system, at the very least in the case of mixed-layer planktic foraminifera.
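The "simple model of boron speciation" referred to above treats dissolved boron as partitioned between B(OH)3 and B(OH)4- according to pH, with the charged borate species incorporated into calcite. Under that model the borate fraction follows directly from a Henderson-Hasselbalch relation; the seawater pK*B of about 8.6 used below is an approximate value (it varies with temperature, salinity, and pressure).

```python
def borate_fraction(pH, pKb=8.6):
    """Fraction of total dissolved boron present as B(OH)4- under the
    simple two-species equilibrium model:
        [B(OH)4-] / [B(OH)3] = 10 ** (pH - pKb)
    pKb ~ 8.6 is an approximate seawater value (T, S, P dependent)."""
    ratio = 10 ** (pH - pKb)
    return ratio / (1 + ratio)
```

At pH equal to pK*B exactly half the dissolved boron is borate, and the fraction rises steeply across the pH range of surface seawater, which is why B/Ca was originally hoped to record pH.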
This research was funded by NERC, Grant Number: NE/D00876X/2.
Wed, 01 Apr 2015 00:00:00 GMT
Henehan, M.J.; Foster, G.L.; Rae, J.W.B.; Prentice, K.C.; Erez, J.; Bostock, H.C.; Marshall, B.J.; Wilson, P.A.

Spatial variation in maximum dive depth in gray seals in relation to foraging.
http://hdl.handle.net/10023/6886
Habitat preference maps are a way of representing animals’ space use in two dimensions. For marine animals, the third dimension is an important aspect of spatial ecology. We used dive data from seven gray seals Halichoerus grypus (a primarily benthic forager) collected with GPS phone tags (Sea Mammal Research Unit) to investigate the distribution of the maximum depth visited in each dive. We modeled maximum dive depth as a function of spatiotemporal covariates using a generalized additive mixed model (GAMM) with individual as a random effect. Bathymetry, horizontal displacement, latitude and longitude, Julian day, sediment type, and light conditions accounted for 37% of the variability in the data. Persistent patterns of autocorrelation in the raw data suggest that individual intrinsic rhythm might be an important factor, not captured by external covariates. The strength of using this statistical method to generate spatial predictions of the distribution of maximum dive depth is its applicability to other plunge and pursuit divers. Despite being predictions of a point estimate, these maps provide some insight into the third dimension of habitat use in marine animals. The capacity to predict this aspect of vertical habitat use may help avoid conflict between animal habitat and coastal or offshore developments.
Theoni Photopoulou was funded by SMRU Ltd in the form of a Ph.D. studentship, 2008–2012.
Tue, 01 Jul 2014 00:00:00 GMT
http://hdl.handle.net/10023/6886
2014-07-01T00:00:00Z
Photopoulou, Theoni; Fedak, Mike; Thomas, Len; Matthiopoulos, Jason
Annual report on the implementation of Council Regulation (EC) No 812/2004 during 2014
http://hdl.handle.net/10023/6855
Mon, 01 Jun 2015 00:00:00 GMT
http://hdl.handle.net/10023/6855
2015-06-01T00:00:00Z
Northridge, Simon; Kingston, Al; Thomas, Len
Using accelerometers to determine the calling behavior of tagged baleen whales
http://hdl.handle.net/10023/6623
Low-frequency acoustic signals generated by baleen whales can propagate over vast distances, making the assignment of calls to specific individuals problematic. Here, we report the novel use of acoustic recording tags equipped with high-resolution accelerometers to detect vibrations from the surface of two tagged fin whales that directly match the timing of recorded acoustic signals. A tag deployed on a buoy in the vicinity of calling fin whales and a recording from a tag that had just fallen off a whale were able to detect calls acoustically but did not record corresponding accelerometer signals that were measured on calling individuals. Across the hundreds of calls measured on two tagged fin whales, the accelerometer response was generally anisotropic across all three axes, appeared to depend on tag placement and increased with the level of received sound. These data demonstrate that high-sample rate accelerometry can provide important insights into the acoustic behavior of baleen whales that communicate at low frequencies. This method helps identify vocalizing whales, which in turn enables the quantification of call rates, a fundamental component of models used to estimate baleen whale abundance and distribution from passive acoustic monitoring.
Tue, 15 Jul 2014 00:00:00 GMT
http://hdl.handle.net/10023/6623
2014-07-15T00:00:00Z
Goldbogen, Jeremy; De Ruiter, Stacy Lynn; Stimpert, Alison; Calambokidis, John; Friedlaender, Ari; Schorr, Greg; Moretti, David; Tyack, Peter Lloyd; Southall, Brandon
Nested row-column designs for near-factorial experiments with two treatment factors and one control treatment
http://hdl.handle.net/10023/6556
This paper presents some methods of designing experiments in a block design with nested rows and columns. The treatments consist of all combinations of levels of two treatment factors, with an additional control treatment.
The authors also thank Queen Mary, University of London, the University of St Andrews and the Poznan University of Life Sciences for financial support. The second author was also supported by the British-Polish Young Scientists Programme, grant WAR/342/116.
Thu, 01 Oct 2015 00:00:00 GMT
http://hdl.handle.net/10023/6556
2015-10-01T00:00:00Z
Bailey, Rosemary Anne; Lacka, Agnieszka
The effect of animal movement on line transect estimates of abundance
http://hdl.handle.net/10023/6466
Line transect sampling is a distance sampling method for estimating the abundance of wild animal populations. One key assumption of this method is that all animals are detected at their initial location. Animal movement independent of the transect and observer can thus cause substantial bias. We present an analytic expression for this bias when detection within the transect is certain (strip transect sampling) and use simulation to quantify bias when detection falls off with distance from the line (line transect sampling). We also explore the non-linear relationship between bias, detection, and animal movement by varying detectability and movement type. We consider animals that move in randomly orientated straight lines, which provides an upper bound on bias, and animals that are constrained to a home range of random radius. We find that bias is reduced when animal movement is constrained, and bias is considerably smaller in line transect sampling than strip transect sampling provided that mean animal speed is less than observer speed. By contrast, when mean animal speed exceeds observer speed the bias in line transect sampling becomes comparable with, and may exceed, that of strip transect sampling. Bias from independent animal movement is reduced by the observer searching further perpendicular to the transect, searching a shorter distance ahead and by ignoring animals that may overtake the observer from behind. However, when animals move in response to the observer, the standard practice of searching further ahead should continue as the bias from responsive movement is often greater than that from independent movement.
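The upper-bound case described above, animals moving in randomly orientated straight lines, can be illustrated with a small simulation. This is a sketch with made-up dimensions and a simplified detection rule (an animal is counted if it ever comes within the strip half-width of the observer), not the paper's analytic setup:

```python
import math
import random

def strip_transect_count(n, animal_speed, seed, L=200.0, w=10.0, v_obs=1.0,
                         region_x=(-250.0, 450.0), region_y=(-250.0, 250.0)):
    """Count animals that ever come within distance w of an observer moving
    from (0, 0) to (L, 0) at speed v_obs, while each animal moves in a
    randomly orientated straight line at animal_speed.  All dimensions here
    are illustrative."""
    rng = random.Random(seed)
    T = L / v_obs
    count = 0
    for _ in range(n):
        x0 = rng.uniform(*region_x)   # animal start, relative to observer start
        y0 = rng.uniform(*region_y)
        th = rng.uniform(0.0, 2.0 * math.pi)
        ux = animal_speed * math.cos(th) - v_obs   # velocity relative to observer
        uy = animal_speed * math.sin(th)
        denom = ux * ux + uy * uy
        # closest approach of the relative track (x0, y0) + (ux, uy) t, 0 <= t <= T
        t = 0.0 if denom == 0.0 else max(0.0, min(T, -(x0 * ux + y0 * uy) / denom))
        if math.hypot(x0 + ux * t, y0 + uy * t) <= w:
            count += 1
    return count

static = strip_transect_count(100000, 0.0, seed=7)
moving = strip_transect_count(100000, 1.0, seed=7)  # mean animal speed == observer speed
assert moving > static  # independent movement inflates the encounter rate upward
```

Because the estimator divides the count by the same surveyed area in both cases, the inflated count for moving animals translates directly into positively biased density, consistent with the abstract's conclusion.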
This work was supported by the University of St Andrews (http://www.st-andrews.ac.uk/; RG, STB, LT) and by a summer scholarship and PhD grant from The Carnegie Trust for the Universities of Scotland (http://www.carnegie-trust.org/) to RG.
Mon, 23 Mar 2015 00:00:00 GMT
http://hdl.handle.net/10023/6466
2015-03-23T00:00:00Z
Glennie, R.; Buckland, S.T.; Thomas, L.
Mixture models for distance sampling detection functions
http://hdl.handle.net/10023/6463
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used “key function plus series adjustment” (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
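The key property claimed for the mixture formulation, automatic monotonicity without constrained optimisation, is easy to check numerically. A minimal sketch with assumed weights and scale parameters (illustrative values, not fitted ones from the paper):

```python
import math

def mixture_halfnormal(x, weights, sigmas):
    """Finite mixture of half-normal key functions:
    g(x) = sum_j w_j * exp(-x^2 / (2 * sigma_j^2)).
    With weights summing to 1, g(0) = 1 and g is automatically monotone
    non-increasing and non-negative, so no constrained optimisation is needed."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * math.exp(-x * x / (2.0 * s * s)) for w, s in zip(weights, sigmas))

# A "spiked" shape: detectability drops rapidly at small distances,
# with a long shoulder from the wider component.
g = lambda x: mixture_halfnormal(x, [0.7, 0.3], [5.0, 50.0])
distances = [0.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0]
values = [g(x) for x in distances]
assert abs(values[0] - 1.0) < 1e-9                      # certain detection on the line
assert all(a >= b for a, b in zip(values, values[1:]))  # monotone non-increasing
```

Each component is non-increasing in distance, so any convex combination is too; that is the structural advantage over series adjustments, which can otherwise produce non-monotonic or negative fits.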
Funding: EPSRC DTG
Fri, 20 Mar 2015 00:00:00 GMT
http://hdl.handle.net/10023/6463
2015-03-20T00:00:00Z
Miller, David Lawrence; Thomas, Len
Most switching classes with primitive automorphism groups contain graphs with trivial groups
http://hdl.handle.net/10023/6429
The operation of switching a graph Gamma with respect to a subset X of the vertex set interchanges edges and non-edges between X and its complement, leaving the rest of the graph unchanged. This is an equivalence relation on the set of graphs on a given vertex set, so we can talk about the automorphism group of a switching class of graphs. It might be thought that switching classes with many automorphisms would have the property that all their graphs also have many automorphisms. But the main theorem of this paper shows a different picture: with finitely many exceptions, if a non-trivial switching class S has primitive automorphism group, then it contains a graph whose automorphism group is trivial. We also find all the exceptional switching classes; up to complementation, there are just six.
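The switching operation itself is elementary to state in code. A small sketch (the vertex labels and example graph are arbitrary) that also checks two facts behind the equivalence-relation claim, that switching is an involution and that X and its complement give the same graph:

```python
def switch(edges, X, vertices):
    """Seidel switching of a graph with respect to X: interchange edges and
    non-edges between X and its complement, leaving edges within X and
    within the complement unchanged."""
    X = set(X)
    out = {frozenset(e) for e in edges}
    for u in X:
        for v in set(vertices) - X:
            out.symmetric_difference_update({frozenset((u, v))})  # toggle the pair
    return out

# Small arbitrary example on 5 vertices: a path 0-1-2-3.
V = range(5)
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
H = switch(G, {0, 1}, V)
assert switch(H, {0, 1}, V) == G     # switching twice restores the graph
assert switch(G, {2, 3, 4}, V) == H  # X and its complement give the same result
```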
Mon, 01 Jun 2015 00:00:00 GMT
http://hdl.handle.net/10023/6429
2015-06-01T00:00:00Z
Cameron, Peter Jephson; Spiga, Pablo
Random coefficient models for complex longitudinal data
http://hdl.handle.net/10023/6386
Longitudinal data are common in biological research. However, real data sets vary considerably in terms of their structure and complexity and present many challenges for statistical modelling. This thesis proposes a series of methods using random coefficients for modelling two broad types of longitudinal response: normally distributed measurements and binary recapture data.
Biased inference can occur in linear mixed-effects modelling if subjects are drawn from a number of unknown sub-populations, or if the residual covariance is poorly specified. To address some of the shortcomings of previous approaches in terms of model selection and flexibility, this thesis presents methods for: (i) determining the presence of latent grouping structures using a two-step approach, involving regression splines for modelling functional random effects and mixture modelling of the fitted random effects; and (ii) flexible modelling of the residual covariance matrix using regression splines to specify smooth and potentially non-monotonic variance and correlation functions.
Spatially explicit capture-recapture methods for estimating the density of animal populations have shown a rapid increase in popularity over recent years. However, further refinements to existing theory and fitting software are required to apply these methods in many situations. This thesis presents: (i) an analysis of recapture data from an acoustic survey of gibbons using supplementary data in the form of estimated angles to detections, (ii) the development of a multi-occasion likelihood including a model for stochastic availability using a partially observed random effect (interpreted in terms of calling behaviour in the case of gibbons), and (iii) an analysis of recapture data from a population of radio-tagged skates using a conditional likelihood that allows the density of animal activity centres to be modelled as functions of time, space and animal-level covariates.
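The two-step idea in contribution (i) above can be caricatured in a few lines. This sketch substitutes per-subject least-squares slopes for functional random effects and a tiny 1-D 2-means pass for mixture modelling, so it only illustrates the structure (fit subject-level coefficients, then cluster them), not the thesis's actual method:

```python
import random

def subject_slope(ts, ys):
    """Ordinary least-squares slope for one subject's longitudinal series."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    return (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
            / sum((t - tbar) ** 2 for t in ts))

def two_means_1d(xs, iters=25):
    """Minimal 1-D 2-means, standing in for the mixture-modelling step."""
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

rng = random.Random(3)
ts = list(range(10))
# Two latent sub-populations: subjects with slope near 1 and near 4.
subjects = ([[1.0 * t + rng.gauss(0, 0.5) for t in ts] for _ in range(20)]
            + [[4.0 * t + rng.gauss(0, 0.5) for t in ts] for _ in range(20)])
slopes = [subject_slope(ts, ys) for ys in subjects]
c0, c1 = sorted(two_means_1d(slopes))
```

Fitting a single mixed model that ignores the grouping would average over the two slope populations; recovering the cluster centres first is what guards against that source of bias.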
Fri, 27 Jun 2014 00:00:00 GMT
http://hdl.handle.net/10023/6386
2014-06-27T00:00:00Z
Kidney, Darren
Statistical ecology comes of age
http://hdl.handle.net/10023/6128
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.
Wed, 24 Dec 2014 00:00:00 GMT
http://hdl.handle.net/10023/6128
2014-12-24T00:00:00Z
Gimenez, Olivier; Buckland, Stephen Terrence; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Remi; Dray, Stephane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frederic; Merigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frederic; Munoz, Francois; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
Inter-annual and seasonal trends in cetacean distribution, density and abundance off southern California
http://hdl.handle.net/10023/6088
Trends in cetacean density and distribution off southern California were assessed through visual line-transect surveys during thirty-seven California Cooperative Oceanic Fisheries Investigations (CalCOFI) cruises from July 2004–November 2013. From sightings of the six most commonly encountered cetacean species, seasonal, annual and overall density estimates were calculated. Blue whales (Balaenoptera musculus), fin whales (Balaenoptera physalus) and humpback whales (Megaptera novaeangliae) were the most frequently sighted baleen whales with overall densities of 0.91/1000 km2 (CV=0.27), 2.73/1000 km2 (CV=0.19), and 1.17/1000 km2 (CV=0.21) respectively. Species-specific density estimates, stratified by cruise, were analyzed using a generalized additive model to estimate long-term trends and correct for seasonal imbalances. Variances were estimated using a non-parametric bootstrap with one day of effort as the sampling unit. Blue whales were primarily observed during summer and fall while fin and humpback whales were observed year-round with peaks in density during summer and spring respectively. Short-beaked common dolphins (Delphinus delphis), Pacific white-sided dolphins (Lagenorhynchus obliquidens) and Dall’s porpoise (Phocoenoides dalli) were the most frequently encountered small cetaceans with overall densities of 705.83/1000 km2 (CV=0.22), 51.98/1000 km2 (CV=0.27), and 21.37/1000 km2 (CV=0.19) respectively. Seasonally, short-beaked common dolphins were most abundant in winter whereas Pacific white-sided dolphins and Dall’s porpoise were most abundant during spring. There were no significant long-term changes in blue whale, fin whale, humpback whale, short-beaked common dolphin or Dall’s porpoise densities while Pacific white-sided dolphins exhibited a significant decrease in density across the ten-year study. The results from this study were fundamentally consistent with earlier studies, but provide greater temporal and seasonal resolution.
Funding was provided by the Chief of Naval Operations Environmental Readiness Division, the United States Navy’s Pacific Fleet, the Naval Postgraduate School Grant #N00244-11-1-027, and the Naval Facilities Engineering Command Living Marine Resources Program.
Sun, 01 Feb 2015 00:00:00 GMT
http://hdl.handle.net/10023/6088
2015-02-01T00:00:00Z
Campbell, G.S.; Thomas, L.; Whitaker, K.; Douglas, A.B.; Calambokidis, J.; Hildebrand, J.A.
Higher biodiversity is required to sustain multiple ecosystem processes across temperature regimes
http://hdl.handle.net/10023/5975
Biodiversity loss is occurring rapidly worldwide, yet it is uncertain whether few or many species are required to sustain ecosystem functioning in the face of environmental change. The importance of biodiversity might be enhanced when multiple ecosystem processes (termed multifunctionality) and environmental contexts are considered, yet no studies have quantified this explicitly to date. We measured five key processes and their combined multifunctionality at three temperatures (5, 10 and 15 °C) in freshwater aquaria containing different animal assemblages (1-4 benthic macroinvertebrate species). For single processes, biodiversity effects were weak and were best predicted by additive-based models, i.e. polyculture performances represented the sum of their monoculture parts. There were, however, significant effects of biodiversity on multifunctionality at the low and the high (but not the intermediate) temperature. Variation in the contribution of species to processes across temperatures meant that greater biodiversity was required to sustain multifunctionality across different temperatures than was the case for single processes. This suggests that previous studies might have underestimated the importance of biodiversity in sustaining ecosystem functioning in a changing environment.
The authors thank the Natural Environment Research Council for financial support awarded to G. W. (Grant reference: NE/D013305/1) that funded D. M. P.'s research. Accepted 11 July 2014.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/5975
2015-01-01T00:00:00Z
Perkins, D.M.; Bailey, R.A.; Dossena, M.; Gamfeldt, L.; Reiss, J.; Trimmer, M.; Woodward, G.
A unifying model for capture-recapture and distance sampling surveys of wildlife populations
http://hdl.handle.net/10023/5797
Spatially explicit capture-recapture (SECR) methods extend traditional capture-recapture methods for estimating population density by using information contained in the location of traps. The central feature of the improvement is the use of the locations of traps at which animals were and were not captured to estimate the distance over which animals are susceptible to capture. We show that standard SECR models are a special case of a more general class of model in which animal detection is not certain, but some information is available about the location of detected animals. The model class accommodates a range of spatial data types and includes as a special case mark-recapture distance sampling, where distances to detected animals are recorded by multiple observers. Other examples of additional information that can be included are bearing to detected animals, strength of acoustic signals received from detected animals, and time of arrival of acoustic signals at detectors. Errors in variables are easily incorporated. We illustrate the versatility of the model and method through a number of applications, in each case using real and simulated data, and comparing our results with those from previous studies where these are available.
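The spatial ingredient common to SECR and the generalisation described here is a detection probability that decays with distance from an animal's activity centre. A minimal sketch with a hypothetical trap grid and illustrative half-normal parameters (the grid, g0 and sigma are assumptions for illustration, not values from the paper):

```python
import math

# Hypothetical 3x3 grid of traps.
traps = [(x, y) for x in (0.0, 50.0, 100.0) for y in (0.0, 50.0, 100.0)]
g0, sigma = 0.3, 25.0  # illustrative half-normal detection parameters

def p_trap(s, trap):
    """Probability an animal with activity centre s is caught at one trap
    (half-normal decay with distance)."""
    d2 = (s[0] - trap[0]) ** 2 + (s[1] - trap[1]) ** 2
    return g0 * math.exp(-d2 / (2.0 * sigma ** 2))

def p_detected(s):
    """Probability of detection by at least one trap: the quantity that
    replaces 'detection is certain' in the generalised model."""
    p_miss = 1.0
    for trap in traps:
        p_miss *= 1.0 - p_trap(s, trap)
    return 1.0 - p_miss

# Detectability is high inside the grid and decays far outside it.
assert p_detected((50.0, 50.0)) > p_detected((200.0, 200.0))
```

The same decaying-detectability surface is what lets the traps at which an animal was *not* caught carry information about where its activity centre lies.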
Funding: Part-funded by Fundacao para a Ciencia e a Tecnologia, Portugal (FCT) under the project PEst-OE/MAT/UI0006/2011 (Marques) and the UK Engineering and Physical Sciences Research Council grant EP/I000917/1.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/5797
2015-01-01T00:00:00Z
Borchers, D. L.; Stevenson, B.C.; Kidney, D.; Thomas, L.; Marques, Tiago A.
Acoustic and foraging behavior of a Baird’s beaked whale, Berardius bairdii, exposed to simulated sonar
http://hdl.handle.net/10023/5787
Beaked whales are hypothesized to be particularly sensitive to anthropogenic noise, based on previous strandings and limited experimental and observational data. However, few species have been studied in detail. We describe the underwater behavior of a Baird's beaked whale (Berardius bairdii) from the first deployment of a multi-sensor acoustic tag on this species. The animal exhibited shallow (23 ± 15 m max depth), intermediate (324 ± 49 m), and deep (1138 ± 243 m) dives. Echolocation clicks were produced with a mean inter-click interval of approximately 300 ms and peak frequency of 25 kHz. Two deep dives included presumed foraging behavior, with echolocation pulsed sounds (presumed prey capture attempts) associated with increased maneuvering, and sustained inverted swimming during the bottom phase of the dive. A controlled exposure to simulated mid-frequency active sonar (3.5–4 kHz) was conducted 4 hours after tag deployment, and within 3 minutes of exposure onset, the tagged whale increased swim speed and body movement, and continued to show unusual dive behavior for each of its next three dives, one of each type. These are the first data on the acoustic foraging behavior in this largest beaked whale species, and the first experimental demonstration of a response to simulated sonar.
Research was supported by the US Navy Chief of Naval Operations, Environmental Readiness Program, the Office of Naval Research, the Naval Postgraduate School, and the National Research Council.
Thu, 13 Nov 2014 00:00:00 GMThttp://hdl.handle.net/10023/57872014-11-13T00:00:00ZStimpert, AlisonDe Ruiter, Stacy LynnSouthall, BrandonMoretti, DavidFalcone, ErinGoldbogen, JeremyFriedlaender, AriSchorr, GregCalambokidis, JohnOptimal cross-over designs for full interaction models
http://hdl.handle.net/10023/5768
We consider repeated measurement designs when a residual or carry-over effect may be present in at most one later period. Since assuming an additive model may be unrealistic for some applications and leads to biased estimation of treatment effects, we consider a model with interactions between carry-over and direct treatment effects. When the aim of the experiment is to study the effects of a treatment used alone, we obtain universally optimal approximate designs. We also propose some efficient designs with a reduced number of subjects.
July 2014
Sat, 01 Nov 2014 00:00:00 GMThttp://hdl.handle.net/10023/57682014-11-01T00:00:00ZBailey, Rosemary AnneDruilhet, PierreComputing in permutation groups without memory
http://hdl.handle.net/10023/5727
Memoryless computation is a new technique to compute any function of a set of registers by updating one register at a time while using no memory. Its aim is to emulate how computations are performed in modern cores, since they typically involve updates of single registers. The memoryless computation model can be fully expressed in terms of transformation semigroups, or in the case of bijective functions, permutation groups. In this paper, we consider how efficiently permutations can be computed without memory. We determine the minimum number of basic updates required to compute any permutation, or any even permutation. The small number of required instructions shows that very small instruction sets could be encoded on cores to perform memoryless computation. We then start looking at a possible compromise between the size of the instruction set and the length of the resulting programs. We consider updates only involving a limited number of registers. In particular, we show that binary instructions are not enough to compute all permutations without memory when the alphabet size is even. These results, though expressed as properties of special generating sets of the symmetric or alternating groups, provide guidelines on the implementation of memoryless computation.
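The flavor of single-register updates can be seen in the classic in-place swap, where two registers are exchanged using no temporary storage; this is a standard illustration of the memoryless model, not a construction from the paper, and the modulus q is arbitrary:

```python
def swap_without_memory(x, y, q=256):
    """Swap two registers over Z/qZ with no temporary variable."""
    # Each instruction updates exactly one register, as in the memoryless model.
    x = (x + y) % q   # x <- x + y
    y = (x - y) % q   # y <- (x + y) - y, i.e. the original x
    x = (x - y) % q   # x <- (x + y) - x, i.e. the original y
    return x, y

assert swap_without_memory(3, 7) == (7, 3)
```

Three basic updates suffice here; the paper's results concern the minimum number of such updates needed to compute arbitrary (or arbitrary even) permutations.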
Funding: UK Engineering and Physical Sciences Research Council (EP/K033956/1)
Sun, 02 Nov 2014 00:00:00 GMThttp://hdl.handle.net/10023/57272014-11-02T00:00:00ZCameron, Peter JephsonFairbairn, BenGadouleau, MaximilienComputing in matrix groups without memory
http://hdl.handle.net/10023/5715
Memoryless computation is a novel means of computing any function of a set of registers by updating one register at a time while using no memory. We aim to emulate how computations are performed on modern cores, since they typically involve updates of single registers. The computation model of memoryless computation can be fully expressed in terms of transformation semigroups, or in the case of bijective functions, permutation groups. In this paper, we view registers as elements of a finite field and we compute linear permutations without memory. We first determine the maximum complexity of a linear function when only linear instructions are allowed. We also determine which linear functions are hardest to compute when the field in question is the binary field and the number of registers is even. Secondly, we investigate some matrix groups, thus showing that the special linear group is internally computable but not fast. Thirdly, we determine the smallest set of instructions required to generate the special and general linear groups. These results are important for memoryless computation, for they show that linear functions can be computed very fast or that very few instructions are needed to compute any linear function. They thus indicate new advantages of using memoryless computation.
Funding: UK Engineering and Physical Sciences Research Council award EP/K033956/1
Sun, 02 Nov 2014 00:00:00 GMThttp://hdl.handle.net/10023/57152014-11-02T00:00:00ZCameron, Peter JephsonFairbairn, BenGadouleau, MaximilienMost primitive groups are full automorphism groups of edge-transitive hypergraphs
http://hdl.handle.net/10023/5580
We prove that, for a primitive permutation group G acting on a set X of size n, other than the alternating group, the probability that Aut(X, Y^G) = G for a random subset Y of X tends to 1 as n tends to infinity. So the property of the title holds for all primitive groups except the alternating groups and finitely many others. This answers a question of M. Klin. Moreover, we give an upper bound of n^(1/2+ε) on the minimum size of the edges in such a hypergraph. This is essentially best possible.
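For readers unfamiliar with the terminology, a toy sketch (unrelated to the paper's asymptotic result): the automorphism group of a hypergraph is the set of vertex permutations that map edges to edges.

```python
from itertools import permutations

# Toy hypergraph on vertex set {0, 1, 2} with the single edge {0, 1}.
# Its automorphism group has order 2: the identity and the swap of 0 and 1.
X = range(3)
edges = {frozenset({0, 1})}
aut = [g for g in permutations(X)
       if {frozenset(g[v] for v in e) for e in edges} == edges]
assert len(aut) == 2
```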
Thu, 01 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10023/55802015-01-01T00:00:00ZBabai, LaszloCameron, Peter JephsonThe effects of acoustic misclassification on cetacean species abundance estimation
http://hdl.handle.net/10023/5163
To estimate the density or abundance of a cetacean species using acoustic detection data, it is necessary to correctly identify the species that are detected. Developing an automated species classifier with a 100% correct classification rate for any species is likely to stay out of reach. It is therefore necessary to consider the effect of misidentified detections on the observed counts, and consequently on abundance or density estimates, and to develop methods to cope with these misidentifications. If misclassification rates are known, it is possible to estimate the true numbers of detected calls without bias. However, misclassification and uncertainty in the level of misclassification increase the variance of the estimates. If the true numbers of calls from different species are similar, then a small amount of misclassification between species and a small amount of uncertainty around the classification probabilities do not have an overly detrimental effect on the overall variance. However, if there is a difference in the encounter rates of the species' calls and/or a large amount of uncertainty in the misclassification rates, then the variance of the estimates becomes very large, and this dramatically increases the variance of the final abundance estimate.
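A minimal sketch of the known-misclassification-rates case: if the confusion matrix C is known, the true call counts can be recovered from the expected observed counts by inverting C. The numbers below are invented for illustration and are not from the study.

```python
import numpy as np

# C[i, j] = P(a call from species j is classified as species i).
C = np.array([[0.9, 0.2],
              [0.1, 0.8]])
n_true = np.array([1000.0, 200.0])
n_obs = C @ n_true                 # expected observed counts: [940., 260.]
n_hat = np.linalg.solve(C, n_obs)  # invert the misclassification
assert np.allclose(n_hat, n_true)
```

In practice n_obs is a noisy realisation rather than its expectation, which is why (as the abstract notes) misclassification and uncertainty about C inflate the variance even when the point estimate is unbiased.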
This work was funded through the Natural Environment Research Council and SMRU Ltd.
Wed, 25 Dec 2013 00:00:00 GMThttp://hdl.handle.net/10023/51632013-12-25T00:00:00ZCaillat, Marjolaine AnnieThomas, LenGillespie, Douglas MichaelDose-response relationships for the onset of avoidance of sonar by free-ranging killer whales
http://hdl.handle.net/10023/5092
Eight experimentally controlled exposures to 1−2 kHz or 6−7 kHz sonar signals were conducted with four killer whale groups. The source level and proximity of the source were increased during each exposure in order to reveal response thresholds. Detailed inspection of movements during each exposure session revealed sustained changes in speed and travel direction judged to be avoidance responses during six of eight sessions. Following methods developed for Phase-I clinical trials in human medicine, response thresholds ranging from 94 to 164 dB re 1 μPa received sound pressure level (SPL) were fitted to Bayesian dose-response functions. Thresholds did not consistently differ by sonar frequency or whether a group had previously been exposed, with a mean SPL response threshold of 142 ± 15 dB (mean ± s.d.). High levels of between- and within-individual variability were identified, indicating that thresholds depended upon other undefined contextual variables. The dose-response functions indicate that some killer whales started to avoid sonar at received SPL below thresholds assumed by the U.S. Navy. The predicted extent of habitat over which avoidance reactions occur depends upon whether whales responded to proximity or received SPL of the sonar or both, but was large enough to raise concerns about biological consequences to the whales.
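As a hedged illustration of a dose-response function (a plain logistic curve, not the fitted Bayesian model from the study), centring the curve on the reported mean threshold of 142 dB gives a response probability of 0.5 at that received level; the spread parameter here is invented:

```python
import math

def p_response(received_spl, mu=142.0, s=15.0):
    """Logistic dose-response: probability of avoidance at a given received SPL (dB)."""
    return 1.0 / (1.0 + math.exp(-(received_spl - mu) / s))

assert abs(p_response(142.0) - 0.5) < 1e-12   # 0.5 probability at the mean threshold
assert p_response(160.0) > p_response(120.0)  # response probability increases with dose
```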
The authors acknowledge the support of the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland) in the completion of this study. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions.
Sat, 01 Feb 2014 00:00:00 GMThttp://hdl.handle.net/10023/50922014-02-01T00:00:00ZMiller, PatrickAntunes, Ricardo NunoWensveen, Paulus JacobusSamarra, Filipa Isabel PereiraAlves, Ana Catarina De CarvalhoTyack, Peter LloydKvadsheim, Petter H.Kleivane, LarsLam, Frans-Peter A.Ainslie, Michael A.Thomas, LenUsing hidden Markov models to deal with availability bias on line transect surveys
http://hdl.handle.net/10023/5017
We develop estimators for line transect surveys of animals that are stochastically unavailable for detection while within detection range. The detection process is formulated as a hidden Markov model with a binary state-dependent observation model that depends on both perpendicular and forward distances. This provides a parametric method of dealing with availability bias when estimates of availability process parameters are available even if series of availability events themselves are not. We apply the estimators to an aerial and a shipboard survey of whales, and investigate their properties by simulation. They are shown to be more general and more flexible than existing estimators based on parametric models of the availability process. We also find that methods using availability correction factors can be very biased when surveys are not close to being instantaneous, as can estimators that assume temporal independence in availability when there is temporal dependence.
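One ingredient of such models is a Markov chain describing the availability process itself. A minimal sketch (with invented transition probabilities, not parameters from the surveyed whales) computes the stationary probability of being available at the surface:

```python
import numpy as np

# Two-state availability chain per time step: state 0 = unavailable (diving),
# state 1 = available (at the surface). Probabilities are illustrative only.
P = np.array([[0.95, 0.05],   # unavailable -> {unavailable, available}
              [0.20, 0.80]])  # available   -> {unavailable, available}

# Stationary distribution pi solves pi @ P = pi with sum(pi) = 1,
# i.e. the eigenvector of P.T for eigenvalue 1, normalised to sum to 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
assert np.allclose(pi @ P, pi)   # here pi is approximately [0.8, 0.2]
```

The temporal dependence encoded in P is exactly what correction-factor methods discard, which is one source of the bias the abstract describes for non-instantaneous surveys.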
This work was supported by EPSRC grant EP/I000917/1
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10023/50172013-01-01T00:00:00ZBorchers, David LouisZucchini, WalterHeide-Jørgensen, M.P.Cañadas, A.Langrock, RolandAn approximate Bayesian method applied to estimating the trajectories of four British grey seal (Halichoerus grypus) populations from pup counts.
http://hdl.handle.net/10023/4688
1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density dependent effects as outputs rather than assumptions. 3. This approach requires fewer assumptions than the state space models currently used, and produces similar estimates. The simplifications made the models easier to fit, reducing their computational intensity and allowing regional differences in demographic parameters to be considered. 4. The approach is not restricted to situations where only a single component of the population is observable, but, particularly in those cases, provides a practical method for extracting information from limited datasets. 5. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.
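As a sketch of the underlying projection model (a Leslie-type matrix with invented survival and fecundity values, not the fitted density-dependent forms the paper estimates):

```python
import numpy as np

# Two-stage projection: [pups, adults]. Values are illustrative only.
fecundity, pup_survival, adult_survival = 0.9, 0.5, 0.95
L = np.array([[0.0,          fecundity],       # pups born to adults
              [pup_survival, adult_survival]]) # recruitment and adult survival
n = np.array([100.0, 400.0])                   # current [pups, adults]
n_next = L @ n                                 # one breeding season forward
assert n_next[0] == fecundity * 400.0          # next year's pup count
```

In the paper's setting only the first component (pups) is observed, and the density-dependent modifications of such a matrix are outputs of the fit rather than assumptions.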
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10023/46882011-01-01T00:00:00ZLonergan, Michael EdwardThompson, DavidThomas, Leonard JosephDuck, Callan DavidModelling group dynamic animal movement
http://hdl.handle.net/10023/4555
1). Group dynamics are a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognized, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods, which can include group dynamics in animal movement models, have been lacking. 2). We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multistate random walks. 3). While in simulation experiments parameter estimators exhibit some bias in non-ideal scenarios, we show that generally the estimation of models of this type is both feasible and ecologically informative. 4). We illustrate the approach using real movement data from 11 reindeer (Rangifer tarandus). Results indicate a directional bias towards a group centroid for reindeer in an encamped state. Though the attraction to the group centroid is relatively weak, our model successfully captures group-influenced movement dynamics. Specifically, as compared to a regular mixture of correlated random walks, the group dynamic model more accurately predicts the non-diffusive behaviour of a cohesive mobile group. 5). As technology continues to develop, it will become easier and less expensive to tag multiple individuals within a group in order to follow their movements. Our work provides a first inferential framework for understanding the relative influences of individual versus group-level movement decisions. This framework can be extended to include covariates corresponding to environmental influences or body condition. 
As such, this framework allows for a broader understanding of the many internal and external factors that can influence an individual's movement.
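An individual-level movement step biased toward the group centroid can be sketched as follows; the attraction strength and noise scale are invented, and this is a simplification of the paper's hidden Markov formulation:

```python
import random

def step_toward_centroid(pos, centroid, strength=0.3, sigma=1.0, rng=random):
    """One movement step: drift a fraction of the way to the centroid, plus noise."""
    dx = centroid[0] - pos[0]
    dy = centroid[1] - pos[1]
    return (pos[0] + strength * dx + rng.gauss(0, sigma),
            pos[1] + strength * dy + rng.gauss(0, sigma))

# With the noise switched off, the step moves exactly `strength` of the way there.
assert step_toward_centroid((0.0, 0.0), (10.0, 0.0), strength=0.5, sigma=0.0) == (5.0, 0.0)
```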
Sat, 01 Feb 2014 00:00:00 GMThttp://hdl.handle.net/10023/45552014-02-01T00:00:00ZLangrock, RolandHopcraft, GrantBlackwell, PaulGoodall, VictoriaKing, RuthNiu, MuPatterson, TobyPedersen, MartinSkarin, AnnaSchick, Robert SchillingA risk function for behavioral disruption of Blainville’s beaked whales (Mesoplodon densirostris) from mid-frequency active sonar
http://hdl.handle.net/10023/4522
There is increasing concern about the potential effects of noise pollution on marine life in the world’s oceans. For marine mammals, anthropogenic sounds may cause behavioral disruption, and this can be quantified using a risk function that relates sound exposure to a measured behavioral response. Beaked whales are a taxon of deep diving whales that may be particularly susceptible to naval sonar as the species has been associated with sonar-related mass stranding events. Here we derive the first empirical risk function for Blainville’s beaked whales (Mesoplodon densirostris) by combining in situ data from passive acoustic monitoring of animal vocalizations and navy sonar operations with precise ship tracks and sound field modeling. The hydrophone array at the Atlantic Undersea Test and Evaluation Center, Bahamas, was used to locate vocalizing groups of Blainville’s beaked whales and identify sonar transmissions before, during, and after Mid-Frequency Active (MFA) sonar operations. Sonar transmission times and source levels were combined with ship tracks using a sound propagation model to estimate the received level (RL) at each hydrophone. A generalized additive model was fitted to the data to model the presence or absence of the start of foraging dives in 30-minute periods as a function of the corresponding sonar RL at the hydrophone closest to the center of each group. This model was then used to construct a risk function that can be used to estimate the probability of a behavioral change (cessation of foraging) that the individual members of a Blainville’s beaked whale population might experience as a function of sonar RL. The function predicts a 0.5 probability of disturbance at an RL of 150 dBrms re 1 µPa (CI: 144 to 155). This is 15 dB lower than the level used historically by the US Navy in their risk assessments, but 10 dB higher than the current 140 dB step-function.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10023/45222014-01-01T00:00:00ZMoretti, DavidThomas, LenMarques, Tiago A.Harwood, JohnDilley, AshleyNeales, BertShaffer, JessicaMccarthy, ENew, Leslie FrancesJarvis, SMorrissey, RonNovel methods for species distribution mapping including spatial models in complex regions
http://hdl.handle.net/10023/4514
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.
The first new method, the Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently-developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low-rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands, and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets.
Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA, Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface that is being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting.
There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.
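The precision-weighted combination step can be illustrated with a simple normal-normal Bayes update (equivalent to inverse-variance weighting). This is a minimal sketch, not the thesis's implementation: the grid, surfaces and standard errors below are invented, and the kernel smoothing applied at the seams between surfaces is omitted.

```python
import numpy as np

# Toy 1-D grid standing in for a 2-D prediction grid; all numbers below
# are invented for illustration, not taken from the thesis.
grid = np.linspace(0.0, 10.0, 101)

# Two overlapping density estimates of the same region, with standard errors.
d1, se1 = np.full_like(grid, 2.0), np.full_like(grid, 0.5)
d2, se2 = np.full_like(grid, 3.0), np.full_like(grid, 1.0)

# Normal-normal Bayes update = inverse-variance weighting: each surface is
# weighted by its precision, so the more precise map dominates the result.
w1, w2 = 1.0 / se1**2, 1.0 / se2**2
combined = (w1 * d1 + w2 * d2) / (w1 + w2)
combined_se = np.sqrt(1.0 / (w1 + w2))
```

Note that the combined standard error is smaller than either input's, which is what allows new surveys to refine, rather than replace, an existing map.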
Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data. In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than those produced using a traditional GAM-based method. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion in the Scottish network of Nature Conservation Marine Protected Areas.
Tue, 05 Nov 2013 00:00:00 GMT
http://hdl.handle.net/10023/4514
2013-11-05T00:00:00Z
Scott-Hayward, Lindesay Alexandra Sarah
Modelling catch sampling uncertainty in fisheries stock assessment : the Atlantic-Iberian sardine case
http://hdl.handle.net/10023/4474
The statistical assessment of harvested fish populations, such as the Atlantic-Iberian sardine (AIS) stock, needs to deal with uncertainties inherent in fisheries systems. Uncertainties arising from sampling errors and stochasticity in stock dynamics must be incorporated in stock assessment models so that management decisions are based on a realistic evaluation of the uncertainty about the status of the stock. The main goal of this study is to develop a stock assessment framework that accounts for some of the uncertainties associated with the AIS stock that are currently not integrated into stock assessment models. In particular, it focuses on accounting for the uncertainty arising from the catch data sampling process.
The central innovation of this thesis is the development of a Bayesian integrated stock assessment (ISA) model, in which an observation model explicitly links stock dynamics parameters with statistical models for the various types of data observed from catches of the AIS stock. This allows for systematic and statistically consistent propagation of the uncertainty inherent in the catch sampling process across the whole stock assessment model, through to estimates of biomass and stock parameters. The method is tested by simulation and found to provide reliable and accurate estimates of stock parameters and associated uncertainty, while also outperforming existing design-based and model-based estimation approaches.
The method is computationally very demanding, and this is an obstacle to its adoption by fisheries bodies. Once this obstacle is overcome, the ISA modelling framework developed and presented in this thesis could make an important contribution to improving the evaluation of uncertainty in fisheries stock assessments, not only for the AIS stock but for any other fish stock with a similar data and dynamics structure. Furthermore, the models developed in this study establish a solid conceptual platform for future development of more complex models of fish population dynamics.
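The basic problem the ISA model addresses, sampling error in the catch data feeding the assessment, can be illustrated with a hypothetical Monte Carlo sketch. This is not the thesis's Bayesian model (which embeds the observation model in the full assessment); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true age composition of the catch and mean weights-at-age;
# illustrative values, not AIS stock data.
true_props = np.array([0.5, 0.3, 0.2])        # ages 1-3
weight_at_age = np.array([0.02, 0.05, 0.08])  # kg per fish
total_catch_numbers = 1e6

# Each replicate mimics an age-sampling programme: n_otoliths fish are aged,
# giving a multinomial estimate of the age composition, which is then raised
# to the total catch to give an estimated catch weight.
n_otoliths, n_reps = 200, 5000
props_hat = rng.multinomial(n_otoliths, true_props, size=n_reps) / n_otoliths
catch_weight = (props_hat * weight_at_age).sum(axis=1) * total_catch_numbers

# The spread across replicates is the sampling uncertainty that an
# assessment treating catch-at-age as known would silently ignore.
print(catch_weight.mean(), catch_weight.std())
```

An integrated model carries this spread through to biomass estimates instead of plugging in a single point estimate of the catch composition.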
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/4474
2013-01-01T00:00:00Z
Caneco, Bruno
Using energetic models to investigate the survival and reproduction of beaked whales (family Ziphiidae)
http://hdl.handle.net/10023/4053
Mass stranding of several species of beaked whales (family Ziphiidae) associated with exposure to anthropogenic sounds has raised concern for the conservation of these species. However, little is known about the species’ life histories, prey or habitat requirements. Without this knowledge, it becomes difficult to assess the effects of anthropogenic sound, since there is no way to determine whether the disturbance is impacting the species’ physical or environmental requirements. Here we take a bioenergetics approach to address this gap in our knowledge, as the elusive, deep-diving nature of beaked whales has made it hard to study these effects directly. We develop a model for Ziphiidae linking feeding energetics to the species’ requirements for survival and reproduction, since these life history traits would be the most likely to be impacted by non-lethal disturbances. Our models suggest that beaked whale reproduction requires energy dense prey, and that poor resource availability would lead to an extension of the inter-calving interval. Further, given current information, it seems that some beaked whale species require relatively high quality habitat in order to meet their requirements for survival and reproduction. As a result, even a small non-lethal disturbance that results in displacement of whales from preferred habitats could potentially impact a population if a significant proportion of that population was affected. We explored the impact of varying ecological parameters and model assumptions on survival and reproduction, and find that calf and fetus survival appear more readily affected than the survival of adult females.
Wed, 17 Jul 2013 00:00:00 GMT
http://hdl.handle.net/10023/4053
2013-07-17T00:00:00Z
New, Leslie Frances; Moretti, David; Hooker, Sascha Kate; Costa, Daniel P.; Simmons, Samantha E.
Spatial models for distance sampling data : recent developments and future directions
http://hdl.handle.net/10023/4046
Our understanding of a biological population can be greatly enhanced by modelling its distribution in space and as a function of environmental covariates. Density surface models consist of a spatial model of the abundance of a biological population which has been corrected for uncertain detection via distance sampling methods. We offer a comparison of recent advances in the field and consider the likely directions of future research. In particular, we consider spatial modelling techniques that may be advantageous to applied ecologists, such as quantification of uncertainty in a two-stage model and smoothing in areas with complex boundaries. The methods discussed are available in an R package developed by the authors and are largely implemented in the popular Windows package Distance (or are soon to be incorporated). Density surface modelling enables applied ecologists to reliably estimate abundances and create maps of animal/plant distribution. Such models can also be used to investigate the relationships between distribution and environmental covariates.
Fri, 01 Nov 2013 00:00:00 GMT
http://hdl.handle.net/10023/4046
2013-11-01T00:00:00Z
Miller, David Lawrence; Burt, M Louise; Rexstad, Eric; Thomas, Len
Estimating resource acquisition and at-sea body condition of a marine predator
http://hdl.handle.net/10023/3867
(1) Body condition plays a fundamental role in many ecological and evolutionary processes at a variety of scales and across a broad range of animal taxa. An understanding of how body condition changes at fine spatial and temporal scales as a result of interaction with the environment provides necessary information about how animals acquire resources. (2) However, comparatively little is known about intra- and interindividual variation of condition in marine systems. Where condition has been studied, changes typically are recorded at relatively coarse time-scales. By quantifying how fine-scale interaction with the environment influences condition, we can broaden our understanding of how animals acquire resources and allocate them to body stores. (3) Here we used a hierarchical Bayesian state-space model to estimate the body condition as measured by the size of an animal's lipid store in two closely related species of marine predator that occupy different hemispheres: northern elephant seals (Mirounga angustirostris) and southern elephant seals (Mirounga leonina). The observation model linked drift dives to lipid stores. The process model quantified daily changes in lipid stores as a function of the physiological condition of the seal (lipid:lean tissue ratio, departure lipid and departure mass), its foraging location, two measures of behaviour and environmental covariates. (4) We found that physiological condition significantly impacted lipid gain at two time-scales – daily and at departure from the colony – that foraging location was significantly associated with lipid gain in both species of elephant seals and that long-term behavioural phase was associated with positive lipid gain in northern and southern elephant seals. In northern elephant seals, the occurrence of short-term behavioural states assumed to represent foraging were correlated with lipid gain. Lipid gain was a function of covariates in both species. 
Southern elephant seals performed fewer drift dives than northern elephant seals and gained lipids at a lower rate. (5) We have demonstrated a new way to obtain time series of body condition estimates for a marine predator at fine spatial and temporal scales. This modelling approach accounts for uncertainty at many levels and has the potential to integrate physiological and movement ecology of top predators. The observation model we used was specific to elephant seals, but the process model can readily be applied to other species, providing an opportunity to understand how animals respond to their environment at a fine spatial scale.
This article was made open access through BIS OA funding.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/3867
2013-01-01T00:00:00Z
Schick, Robert Schilling; New, Leslie; Thomas, Len; Costa, Daniel; Hindell, Mark; McMahon, Clive; Robinson, Patrick; Simmons, Samantha; Thums, Michele; Harwood, John; Clark, James
Evidence for density-dependent changes in body condition and pregnancy rate of North Atlantic fin whales over four decades of varying environmental conditions
http://hdl.handle.net/10023/3854
A central theme in ecology is the search for pattern in the response of a species to changing environmental conditions. Natural resource management and endangered species conservation require an understanding of density-dependent and density-independent factors that regulate populations. Marine mammal populations are expected to express density dependence in the same way as terrestrial mammals, but logistical difficulties in data acquisition for many large whale species have hindered attempts to identify population-regulation mechanisms. We explored relationships between body condition (inferred from patterns in blubber thickness) and per capita prey abundance, and between pregnancy rate and body condition in North Atlantic fin whales as environmental conditions and population size varied between 1967 and 2010. Blubber thickness in both males and females declined at low per capita prey availability, and in breeding-age females, pregnancy rate declined at low blubber thickness, demonstrating a density-dependent response of pregnancy to prey limitation mediated through body condition. To the best of our knowledge, this is the first time a quantitative relationship among per capita prey abundance, body condition, and pregnancy rate has been documented for whales. As long-lived predators, marine mammals can act as indicators of the state of marine ecosystems. Improving our understanding of the relationships that link prey, body condition, and population parameters such as pregnancy rate and survival will become increasingly useful as these systems are affected by natural and anthropogenic change. Quantifying linkages among prey, fitness and vital rates will improve our ability to predict population consequences of subtle, sublethal impacts of ocean noise and other anthropogenic stressors.
Fri, 01 Mar 2013 00:00:00 GMT
http://hdl.handle.net/10023/3854
2013-03-01T00:00:00Z
Williams, Robert; Vikingsson, Gisli A.; Gislason, Astthor; Lockyer, Christina; New, Leslie; Thomas, Len; Hammond, Philip Steven
First direct measurements of behavioural responses by Cuvier's beaked whales to mid-frequency active sonar
http://hdl.handle.net/10023/3836
Most marine mammal strandings coincident with naval sonar exercises have involved Cuvier's beaked whales (Ziphius cavirostris). We recorded animal movement and acoustic data on two tagged Ziphius and obtained the first direct measurements of behavioural responses of this species to mid-frequency active (MFA) sonar signals. Each recording included a 30-min playback (one 1.6-s simulated MFA sonar signal repeated every 25 s); one whale was also incidentally exposed to MFA sonar from distant naval exercises. Whales responded strongly to playbacks at low received levels (RLs; 89–127 dB re 1 µPa): after ceasing normal fluking and echolocation, they swam rapidly, silently away, extending both dive duration and subsequent non-foraging interval. Distant sonar exercises (78–106 dB re 1 µPa) did not elicit such responses, suggesting that context may moderate reactions. The observed responses to playback occurred at RLs well below current regulatory thresholds; equivalent responses to operational sonars could elevate stranding risk and reduce foraging efficiency.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/3836
2013-01-01T00:00:00Z
De Ruiter, Stacy Lynn; Southall, Brandon L.; Calambokidis, John; Zimmer, Walter M. X.; Sadykova, Dinara; Falcone, Erin A.; Friedlaender, Ari S.; Joseph, John E.; Moretti, David; Schorr, Gregory S.; Thomas, Len; Tyack, Peter Lloyd
Estimating wildlife distribution and abundance from line transect surveys conducted from platforms of opportunity
http://hdl.handle.net/10023/3727
Line transect data obtained from 'platforms of opportunity' are useful for the monitoring of long-term trends in dolphin populations which occur over vast areas, yet analyses of such data are problematic due to violation of fundamental assumptions of line transect methodology. In this thesis we develop methods which allow estimates of dolphin relative abundance to be obtained when certain assumptions of line transect sampling are violated.
Generalised additive models are used to model encounter rate and mean school size as a function of spatially and temporally referenced covariates. The estimated relationship between the response and the environmental and locational covariates is then used to obtain a predicted surface for the response over the entire survey region. Given those predicted surfaces, a density surface can then be obtained and an estimate of abundance computed by numerically integrating over the entire survey region. This approach is particularly useful when search effort is not random, in which case standard line transect methods would yield biased estimates.
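The final step, numerically integrating a predicted density surface to obtain abundance, can be sketched as below. The closed-form surface and the region dimensions are invented stand-ins for the GAM predictions described above.

```python
import numpy as np

# Hypothetical smooth density surface (animals per km^2) over a 100 x 50 km
# region, evaluated at cell mid-points on a 0.5 km prediction grid. In the
# thesis this surface would come from GAM predictions, not a formula.
xs = np.linspace(0.25, 99.75, 200)
ys = np.linspace(0.25, 49.75, 100)
x, y = np.meshgrid(xs, ys)
density = 0.5 + 0.3 * np.exp(-((x - 40.0) ** 2 + (y - 25.0) ** 2) / 200.0)

# Abundance estimate: integrate density over the region via the midpoint
# rule, i.e. sum of (cell density * cell area).
cell_area = 0.5 * 0.5  # km^2
abundance = density.sum() * cell_area
```

The same grid-cell summation works for any prediction grid, which is why the approach extends naturally to irregular survey regions.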
Estimates of f(0) (the inverse of the effective strip (half-)width), an essential component of the line transect estimator, may also be biased due to heterogeneity in detection probabilities. We developed a conditional likelihood approach in which covariate effects are directly incorporated into the estimation procedure. Simulation results indicated that the method performs well in the presence of size-bias. When multiple covariates are used, it is important that covariate selection be carried out.
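For intuition about the role of f(0), here is the classic covariate-free half-normal case, where the maximum-likelihood estimate has a closed form; the conditional-likelihood covariate approach developed in the thesis generalises this. The simulated distances, line length and sigma are invented for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Simulated perpendicular detection distances (km) from a half-normal
# detection function with scale sigma = 0.4; purely illustrative data.
sigma_true = 0.4
distances = np.abs(rng.normal(0.0, sigma_true, size=500))

# Half-normal MLE: sigma^2_hat is the mean squared distance, and
# f(0) = sqrt(2 / (pi * sigma^2)). Its inverse, mu, is the effective
# strip half-width.
sigma2_hat = np.mean(distances**2)
f0 = math.sqrt(2.0 / (math.pi * sigma2_hat))
mu = 1.0 / f0  # effective strip half-width (km)

# Density estimate for n detections along total line length L (km).
n, L = len(distances), 400.0
density = n * f0 / (2.0 * L)  # animals per km^2
```

Heterogeneity in detection (e.g. larger schools being seen further away) biases sigma2_hat, and hence f(0) and density, which is what motivates bringing covariates into the likelihood.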
As an example we applied the methods described above to eastern tropical Pacific dolphin stocks. However, uncertainty in stock identification has never been directly incorporated into methods used to obtain estimates of relative or absolute abundance. Therefore we illustrate an approach in which trends in dolphin relative abundance are monitored by small areas, rather than stocks.
Mon, 01 Jan 2001 00:00:00 GMT
http://hdl.handle.net/10023/3727
2001-01-01T00:00:00Z
Marques, Fernanda F. C.
Bayesian point process modelling of ecological communities
http://hdl.handle.net/10023/3710
The modelling of biological communities is important to further the understanding
of species coexistence and the mechanisms involved in maintaining
biodiversity. This involves considering not only interactions between individual
biological organisms, but also the incorporation of covariate information,
if available, in the modelling process. This thesis explores the use
of point processes to model interactions in bivariate point patterns within
a Bayesian framework, and, where applicable, in conjunction with covariate
data. Specifically, we distinguish between symmetric and asymmetric species
interactions and model these using appropriate point processes. In this thesis
we consider both pairwise and area interaction point processes to allow for
inhibitory interactions and both inhibitory and attractive interactions.
It is envisaged that the analyses and innovations presented in this thesis
will contribute to the parsimonious modelling of biological communities.
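A standard example of the pairwise interaction models used for inhibitory interactions is the Strauss process, whose unnormalised density is beta^n * gamma^s(x), where s(x) counts point pairs closer than an interaction radius r and gamma < 1 gives inhibition. A minimal sketch (the parameter values are illustrative, and this is the classical single-species form, not the bivariate models of the thesis):

```python
import math
from itertools import combinations

def strauss_log_density(points, beta, gamma, r):
    """Unnormalised log-density of a Strauss pairwise interaction
    process: n * log(beta) + s * log(gamma), where s is the number
    of pairs of points within distance r of each other."""
    s = sum(1 for (x1, y1), (x2, y2) in combinations(points, 2)
            if math.hypot(x2 - x1, y2 - y1) < r)
    return len(points) * math.log(beta) + s * math.log(gamma)

pts = [(0.1, 0.1), (0.15, 0.1), (0.9, 0.9)]   # exactly one close pair
logd = strauss_log_density(pts, beta=50.0, gamma=0.5, r=0.1)
```

The normalising constant of such densities is intractable, which is one reason Bayesian fitting of these models requires specialised MCMC machinery.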
Fri, 28 Jun 2013 00:00:00 GMThttp://hdl.handle.net/10023/37102013-06-28T00:00:00ZNightingale, Glenna FaithAnimal population estimation using mark-recapture and plant-capture
http://hdl.handle.net/10023/3655
Mark-recapture is a method of population estimation that involves capturing a number
of animals from a population of unknown size on several occasions, and marking
those animals that are caught each time. By observing the number of marked
animals that are subsequently seen, estimates of the total population size can be
made. There are various subclasses of the mark-recapture method called the Otis-class
of models (Otis, Burnham, White & Anderson 1978). These relate to the
assumed behaviour of the individuals in the target population.
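The simplest instance of this idea is the two-sample Lincoln-Petersen estimator: mark n1 animals, later capture n2 and count m2 marked recaptures, giving N-hat = n1 * n2 / m2. A short sketch, with Chapman's usual bias-corrected form alongside (the counts are made up for illustration):

```python
def lincoln_petersen(n1, n2, m2):
    """Basic two-sample mark-recapture estimate: N-hat = n1 * n2 / m2."""
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Chapman's bias-corrected form, defined even when m2 = 0."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

N_lp = lincoln_petersen(100, 80, 20)   # 400.0
N_ch = chapman(100, 80, 20)
```

The Otis-class models generalise this basic setup by allowing capture probability to vary with time, behavioural response to capture, or individual heterogeneity.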
More recent work has generalised the theory of mark-recapture to the so-called
plant-capture, where a known number of animals are pre-inserted into the target
population. Sampling is then carried out as normal, but with additional information
coming from knowledge of the number of planted individuals.
The theory underpinning plant-capture is less well developed than that of mark-recapture,
and the benefit of the former over the latter for population estimation has seldom been
tested. This thesis shows that, under fixed and random sample-size models, the
inclusion of plants can improve the mean point population estimation of various
estimators. The estimator of Pathak (1964) is generalised to allow for the inclusion
of plants into the target population. The results show that mean estimates from
most estimators, under most models, can be improved with the inclusion of plants,
and the sample standard deviations of the simulations can be reduced. This improvement
in mean point population estimation is particularly pronounced when
the number of animals captured is low.
Sample coverage, which is the proportion of distinct animals caught during sampling,
is also often sought by practitioners. Given here is a generalisation of the
inverse population estimator of Pathak (1964) to plant-capture and a proposed new
inverse population estimator, which can be used as estimates of the coverage of a
sample.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10023/36552012-01-01T00:00:00ZGormley, RichardEstimating anglerfish abundance from trawl surveys, and related problems
http://hdl.handle.net/10023/3652
The content of this thesis was motivated by the need to estimate anglerfish abundance
from stratified random trawl surveys of the anglerfish stock which occupies
the northern European shelf (Fernandes et al., 2007). The survey was conducted
annually from 2005 to 2010 in order to obtain age-structured estimates of absolute
abundance for this stock. An estimation method is developed which incorporates statistical models for herding, length-based net retention probability and missing age data, and which propagates uncertainty from all of these sources into variance estimation.
A key component of abundance estimation is the estimation of capture probability.
Capture probability is estimated from the experimental survey data using various
logistic regression models with haul as a random effect. Conditional on the estimated
capture probability, a number of abundance estimators are developed and applied to
the anglerfish data. The abundance estimators differ in the way that the haul effect is incorporated. The performance of these estimators is investigated by simulation. An estimator with form similar to that conventionally used to estimate abundance from distance sampling surveys is found to perform best.
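The estimators referred to here build on the Horvitz-Thompson idea: each detected animal is weighted by the inverse of its estimated capture probability. A minimal sketch, assuming the per-animal probabilities have already been estimated (the values are illustrative, and the haul random effect is omitted):

```python
def horvitz_thompson(capture_probs):
    """Abundance estimate: sum over detected animals of 1 / p_i,
    where p_i is the (estimated) capture probability of animal i."""
    return sum(1.0 / p for p in capture_probs)

# Four detections with unequal estimated capture probabilities:
# 1/0.5 + 1/0.5 + 1/0.25 + 1/0.2 = 2 + 2 + 4 + 5 = 13
N_hat = horvitz_thompson([0.5, 0.5, 0.25, 0.2])
```

The estimators compared in the thesis differ in how the haul-level random effect enters the p_i, for example by plugging in conditional versus marginal capture probabilities.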
The estimators developed for the anglerfish survey data which incorporate random
effects in capture probability have wider application than trawl surveys. We examine
the analytic properties of these estimators when the capture/detection probability is
known. We apply these estimators to three different types of survey data in addition
to the anglerfish data, with different forms of random effects and investigate their
performance by simulation. We find that a generalization of the form of estimator
typically used on line transect surveys performs best overall, with the lowest bias and
mean squared error among all the estimators we considered.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10023/36522012-01-01T00:00:00ZYuan, YuanMixed effect models in distance sampling
http://hdl.handle.net/10023/3618
Recently, much effort has been expended on improving conventional distance sampling methods, e.g. by replacing the design-based approach with a model-based approach where observed counts are related to environmental covariates (Hedley and Buckland, 2004) or by incorporating covariates in the detection function model (Marques and Buckland, 2003).
While these models have generally been limited to include fixed effects, we propose
four different methods for analysing distance sampling data using mixed effects models. These include an extension of the two-stage approach (Buckland et al., 2009),
where we include site random effects in the second-stage count model to account for
correlated counts at the same sites. We also present two integrated approaches which
include site random effects in the count model. These approaches combine the analysis stages for the detection and count models and allow simultaneous estimation of all
parameters. Furthermore, we develop a detection function model that incorporates
random effects. We also propose a novel Bayesian approach to analysing distance sampling data which uses a Metropolis-Hastings algorithm for updating model parameters and a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm for assessing model uncertainty. Lastly, we propose using hierarchical centering as a novel technique for improving model mixing and hence facilitating an RJMCMC algorithm for mixed models.
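The Metropolis-Hastings step used for updating model parameters can be illustrated in miniature. Below is a generic random-walk sampler for a one-dimensional log-posterior; the target density, step size and iteration count are placeholders, not those used in the thesis.

```python
import math
import random

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis: propose theta' ~ N(theta, step^2) and
    accept with probability min(1, post(theta') / post(theta))."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Target: a standard normal log-density (up to an additive constant)
draws = metropolis_hastings(lambda t: -0.5 * t * t, theta0=0.0)
mean = sum(draws) / len(draws)
```

The RJMCMC algorithm mentioned above extends this scheme with trans-dimensional moves that add or drop model terms, which is what makes hierarchical centering useful for improving mixing.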
We analyse two case studies, both large-scale point transect surveys, where the interest lies in establishing the effects of conservation buffers on agricultural fields. For each case study, we compare the results from one integrated approach to those from
the extended two-stage approach. We find that these may differ in parameter estimates for covariates that were both in the detection and the count model and in model probabilities when model uncertainty was included in inference. The performance of the random effects based detection function is assessed via simulation and when heterogeneity in the data is present, one of the new estimators yields improved results compared to conventional distance sampling estimators.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10023/36182013-01-01T00:00:00ZOedekoven, Cornelia SabrinaEstimating animal population density using passive acoustics
http://hdl.handle.net/10023/3496
Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here
Wed, 01 May 2013 00:00:00 GMThttp://hdl.handle.net/10023/34962013-05-01T00:00:00ZMarques, Tiago A.Thomas, LenMartin, StephenMellinger, DavidWard, JessicaMoretti, DavidHarris, Danielle VeronicaTyack, Peter LloydDecomposition tables for experiments. II. Two–one randomizations
http://hdl.handle.net/10023/3479
We investigate structure for pairs of randomizations that do not follow each other in a chain. These are unrandomized-inclusive, independent, coincident or double randomizations. This involves taking several structures that satisfy particular relations and combining them to form the appropriate orthogonal decomposition of the data space for the experiment. We show how to establish the decomposition table giving the sources of variation, their relationships and their degrees of freedom, so that competing designs can be evaluated. This leads to recommendations for when the different types of multiple randomization should be used.
Fri, 01 Oct 2010 00:00:00 GMThttp://hdl.handle.net/10023/34792010-10-01T00:00:00ZBrien, C. J.Bailey, Rosemary AnneDecomposition tables for experiments I. A chain of randomizations
http://hdl.handle.net/10023/3478
One aspect of evaluating the design for an experiment is the discovery of the relationships between subspaces of the data space. Initially we establish the notation and methods for evaluating an experiment with a single randomization. Starting with two structures, or orthogonal decompositions of the data space, we describe how to combine them to form the overall decomposition for a single-randomization experiment that is "structure balanced." The relationships between the two structures are characterized using efficiency factors. The decomposition is encapsulated in a decomposition table. Then, for experiments that involve multiple randomizations forming a chain, we take several structures that pairwise are structure balanced and combine them to establish the form of the orthogonal decomposition for the experiment. In particular, it is proven that the properties of the design for such an experiment are derived in a straightforward manner from those of the individual designs. We show how to formulate an extended decomposition table giving the sources of variation, their relationships and their degrees of freedom, so that competing designs can be evaluated.
Tue, 01 Dec 2009 00:00:00 GMThttp://hdl.handle.net/10023/34782009-12-01T00:00:00ZBrien, C. J.Bailey, Rosemary AnneQuantifying biodiversity trends in time and space
http://hdl.handle.net/10023/3414
The global loss of biodiversity calls for robust large-scale diversity assessment. Biological diversity is a multi-faceted concept; defined as the “variety of life”, answering questions such as “How much is there?” or more precisely “Have we succeeded in reducing the rate of its decline?” is not straightforward. While various aspects of biodiversity give rise to numerous ways of quantification, we focus on temporal (and spatial) trends and their changes in species diversity.
Traditional diversity indices summarise information contained in the species abundance distribution, i.e. each species' proportional contribution to total abundance. Estimated from data, these indices can be biased if variation in detection probability is ignored. We discuss differences between diversity indices and demonstrate possible adjustments for detectability.
Additionally, most indices focus on the most abundant species in ecological communities. We introduce a new set of diversity measures, based on a family of goodness-of-fit statistics. A function of a free parameter, this family allows us to vary the sensitivity of these measures to dominance and rarity of species.
Their performance is studied by assessing temporal trends in diversity for five communities of British breeding birds based on 14 years of survey data, where they are applied alongside the current headline index, a geometric mean of relative abundances. Revealing the contributions of both rare and common species to biodiversity trends, these "goodness-of-fit" measures provide novel insights into how ecological communities change over time.
Biodiversity is not only subject to temporal changes, but it also varies across space. We take first steps towards estimating spatial diversity trends. Finally, processes maintaining biodiversity act locally, at specific spatial scales. Contrary to abundance-based summary statistics, spatial characteristics of ecological communities may distinguish these processes. We suggest a generalisation to a spatial summary, the cross-pair overlap distribution, to render it more flexible to spatial scale.
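The headline index mentioned above, a geometric mean of relative abundances, can be written down directly: for each species take its abundance in year t relative to a baseline year, then average on the log scale. A minimal sketch with made-up counts:

```python
import math

def geometric_mean_index(counts_t, counts_base):
    """Geometric mean across species of (N_t / N_base): the
    exponential of the mean log relative abundance."""
    logs = [math.log(nt / nb) for nt, nb in zip(counts_t, counts_base)]
    return math.exp(sum(logs) / len(logs))

# One species doubling while another halves leaves the index at 1,
# illustrating why such an index can mask compositional change.
idx = geometric_mean_index([20, 5], [10, 10])
```

This insensitivity to which species drive the change is exactly the gap that the goodness-of-fit family of measures, with its tunable weighting of rare versus dominant species, is designed to address.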
Fri, 30 Nov 2012 00:00:00 GMThttp://hdl.handle.net/10023/34142012-11-30T00:00:00ZStudeny, Angelika C.Finite and infinite ergodic theory for linear and conformal dynamical systems
http://hdl.handle.net/10023/3220
The first main topic of this thesis is the thorough analysis of two families of piecewise linear
maps on the unit interval, the α-Lüroth and α-Farey maps. Here, α denotes a countably infinite
partition of the unit interval whose atoms only accumulate at the origin. The basic properties
of these maps will be developed, including that each α-Lüroth map (denoted Lα) gives rise to a
series expansion of real numbers in [0,1], a certain type of Generalised Lüroth Series. The first
example of such an expansion was given by Lüroth. The map Lα is the jump transformation
of the corresponding α-Farey map Fα. The maps Lα and Fα share the same relationship as the
classical Farey and Gauss maps which give rise to the continued fraction expansion of a real
number. We also consider the topological properties of Fα and some Diophantine-type sets of
numbers expressed in terms of the α-Lüroth expansion.
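The classical Lüroth expansion, the special case that the α-Lüroth maps generalise, writes x in (0,1) as x = 1/d1 + 1/(d1(d1-1)) * (1/d2 + ...), with digits d >= 2 determined by d = n whenever x lies in [1/n, 1/(n-1)). A sketch of digit extraction and reconstruction for this classical case (not the α-partition version of the thesis):

```python
import math

def luroth_digits(x, k):
    """First k digits of the classical Lüroth expansion of x in (0,1):
    d = ceil(1/x), then apply the Lüroth map x -> d(d-1)x - (d-1)."""
    digits = []
    for _ in range(k):
        if x <= 0:
            break                          # expansion terminated
        d = math.ceil(1.0 / x)
        digits.append(d)
        x = d * (d - 1) * x - (d - 1)      # jump-transformation step
    return digits

def luroth_value(digits):
    """Partial sum 1/d1 + 1/(d1(d1-1)) * (1/d2 + ...)."""
    total, factor = 0.0, 1.0
    for d in digits:
        total += factor / d
        factor /= d * (d - 1)
    return total

digs = luroth_digits(0.4, 10)   # 0.4 = 1/3 + (1/6)(1/3) + ... : digits 3, 3, 3, ...
approx = luroth_value(digs)
```

For x = 0.4 the digit sequence is constant equal to 3, and the partial sums converge geometrically back to 0.4, mirroring the general fact that each Lα gives rise to a series expansion of the reals in [0,1].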
Next we investigate certain ergodic-theoretic properties of the maps Lα and Fα. It will turn
out that the Lebesgue measure λ is invariant for every map Lα and that there exists a unique
Lebesgue-absolutely continuous invariant measure for Fα. We will give a precise expression for
the density of this measure. Our main result is that both Lα and Fα are exact, and thus ergodic.
The interest in the invariant measure for Fα lies in the fact that under a particular condition on
the underlying partition α, the invariant measure associated to the map Fα is infinite.
Then we proceed to introduce and examine the sequence of α-sum-level sets arising from
the α-Lüroth map, for an arbitrary given partition α. These sets can be written dynamically in
terms of Fα. The main result concerning the α-sum-level sets is to establish weak and strong
renewal laws. Note that for the Farey map and the Gauss map, the analogue of this result has
been obtained by Kesseböhmer and Stratmann. There the results were derived by using advanced
infinite ergodic theory, rather than the strong renewal theorems employed here. This underlines
the fact that one of the main ingredients of infinite ergodic theory is provided by some delicate
estimates in renewal theory.
Our final main result concerning the α-Lüroth and α-Farey systems is to provide a fractal-geometric
description of the Lyapunov spectra associated with each of the maps Lα and Fα.
The Lyapunov spectra for the Farey map and the Gauss map have been investigated in detail by
Kesseböhmer and Stratmann. The Farey map and the Gauss map are non-linear, whereas the
systems we consider are always piecewise linear. However, since our analysis is based on a large
family of different partitions of the unit interval U, the class of maps which we consider in this thesis allows us
to detect a variety of interesting new phenomena, including that of phase transitions.
Finally, we come to the conformal systems of the title. These are the limit sets of discrete
subgroups of the group of isometries of the hyperbolic plane. For these so-called Fuchsian
groups, our first main result is to establish the Hausdorff dimension of some Diophantine-type
sets contained in the limit set that are similar to those considered for the maps Lα. These sets
are then used in our second main result to analyse the more geometrically defined strict-Jarník
limit set of a Fuchsian group. Finally, we obtain a “weak multifractal spectrum” for the Patterson
measure associated to the Fuchsian group.
Wed, 30 Nov 2011 00:00:00 GMT
http://hdl.handle.net/10023/3220
Munday, Sara
Workshop on new developments in cetacean survey methods
http://hdl.handle.net/10023/3216
This report contains the slides from a workshop on New Developments in Cetacean Survey Methods held on 27th November 2011 at the 19th Biennial Conference on the Biology of Marine Mammals, Tampa, Florida. Review talks were given on Passive Acoustic Density Estimation (Len Thomas); Dealing with g(0)<1: Perception Bias (Stephen Buckland); Dealing with g(0)<1: Availability Bias (Hans Skaug); Dealing with Measurement Error (David Borchers); and Density Surface Modelling (Jay Barlow). The sessions were followed by a discussion, and this is summarized at the end of the report.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/10023/3216
Borchers, David Louis; Thomas, Len; Buckland, Stephen Terrence; Skaug, Hans; Barlow, Jay
Spatial patterns and species coexistence : using spatial statistics to identify underlying ecological processes in plant communities
http://hdl.handle.net/10023/3084
The use of spatial statistics to investigate ecological processes in plant communities is becoming increasingly widespread. In diverse communities such as tropical rainforests, analysis of spatial structure may help to unravel the various processes that act and interact to maintain high levels of diversity. In particular, a number of contrasting mechanisms have been suggested to explain species coexistence, and these differ greatly in their practical implications for the ecology and conservation of tropical forests. Traditional first-order measures of community structure have proved unable to distinguish these mechanisms in practice, but statistics that describe spatial structure may be able to do so. This is of great interest and relevance as spatially explicit data become available for a range of ecological communities and analysis methods for these data become more accessible.
This thesis investigates the potential for inference about underlying ecological processes in plant communities using spatial statistics. Current methodologies for spatial analysis are reviewed and extended, and are used to characterise the spatial signals of the principal theorised mechanisms of coexistence. The sensitivity of a range of spatial statistics to these signals is assessed, and the strength of such signals in natural communities is investigated.
The spatial signals of the processes considered here are found to be strong and robust to modelled stochastic variation. Several new and existing spatial statistics are found to be sensitive to these signals, and offer great promise for inference about underlying processes from empirical data. The relative strengths of particular processes are found to vary between natural communities, with any one theory being insufficient to explain observed patterns. This thesis extends both understanding of species coexistence in diverse plant communities and the methodology for assessing underlying process in particular cases. It demonstrates that the potential of spatial statistics in ecology is great and largely unexplored.
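Statistics describing spatial structure of the kind discussed here summarise second-order properties of a point pattern; a standard example is Ripley's K, which counts pairs of plants within distance r of one another, scaled by intensity. A naive sketch without edge correction (the coordinates are invented; real analyses need edge-corrected estimators):

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimate: K(r) = (area / n^2) * #{ordered pairs within r}.

    No edge correction -- adequate for a sketch, but biased near plot boundaries.
    """
    n = len(points)
    pairs = sum(
        1
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points)
        if i != j and math.hypot(xi - xj, yi - yj) <= r
    )
    return area * pairs / n ** 2

# Under complete spatial randomness K(r) ~ pi * r^2; clustering (e.g. from
# dispersal limitation) pushes K above that benchmark, while conspecific
# repulsion (e.g. Janzen-Connell effects) pushes it below.
pts = [(0.10, 0.10), (0.20, 0.10), (0.80, 0.80)]
k = ripley_k(pts, r=0.15, area=1.0)
```

Comparing such estimates against the expectation under randomness is one way spatial statistics can, in principle, distinguish coexistence mechanisms that first-order abundance measures cannot.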
Thu, 01 Nov 2012 00:00:00 GMT
http://hdl.handle.net/10023/3084
Brown, Calum
Vessel noise affects beaked whale behavior : Results of a dedicated acoustic response study
http://hdl.handle.net/10023/3078
Some beaked whale species are susceptible to the detrimental effects of anthropogenic noise. Most studies have concentrated on the effects of military sonar, but other forms of acoustic disturbance (e.g. shipping noise) may also disrupt behavior. An experiment involving the exposure of target whale groups to intense vessel-generated noise tested how these exposures influenced the foraging behavior of Blainville’s beaked whales (Mesoplodon densirostris) in the Tongue of the Ocean (Bahamas). A military array of bottom-mounted hydrophones was used to measure the response based upon changes in the spatial and temporal pattern of vocalizations. The archived acoustic data were used to compute metrics of the echolocation-based foraging behavior for 16 targeted groups, 10 groups further away on the range, and 26 non-exposed groups. The duration of foraging bouts was not significantly affected by the exposure. Changes in the hydrophone over which the group was most frequently detected occurred as the animals moved around within a foraging bout, and their number was significantly lower the closer the whales were to the sound source. Non-exposed groups also had significantly more changes in the primary hydrophone than exposed groups, irrespective of distance. Our results suggest that broadband ship noise caused a significant change in beaked whale behavior up to at least 5.2 kilometers away from the vessel. The observed change could potentially correspond to a restriction in the movement of groups, a period of more directional travel, a reduction in the number of individuals clicking within the group, or a response to changes in prey movement.
Fri, 03 Aug 2012 00:00:00 GMT
http://hdl.handle.net/10023/3078
Pirotta, Enrico; Milor, Rachel; Quick, Nicola Jane; Moretti, David; Dimarzio, Nancy; Tyack, Peter Lloyd; Boyd, Ian; Hastie, Gordon Drummond
Global analysis of cetacean line-transect surveys : detecting trends in cetacean density
http://hdl.handle.net/10023/2747
Measuring the effect of anthropogenic change on cetacean populations is hampered by our lack of understanding about population status and a lack of power in the available data to detect trends in abundance. Often long-term data from repeated surveys are lacking, and alternative approaches to trend detection must be considered. We utilised an existing database of line transect survey records to determine whether temporal trends could be detected when survey effort from around the world was combined. We extracted density estimates for 25 species and fitted generalised additive models (GAMs) to investigate whether taxonomic, spatial or methodological differences among systematic line-transect surveys affect estimates of density and whether we can identify temporal trends in the data once these factors are accounted for. The selected GAM consisted of 2 parts: an intercept term that was a complex interaction of taxonomic, spatial and methodological factors and a smooth temporal term with trends varying by family and ocean basin. We discuss the trends found and assess the suitability of published density estimates for detecting temporal trends using retrospective power analysis. In conclusion, increasing sample size through combining survey effort across a global scale does not necessarily result in sufficient power to detect trends because of the extent of variability across surveys, species and oceans. Instead, results from repeated dedicated surveys designed specifically for the species and geographical region of interest should be used to inform conservation and management.
Mon, 07 May 2012 00:00:00 GMT
http://hdl.handle.net/10023/2747
Jewell, Rebecca Lucy; Thomas, Len; Harris, Catriona M; Kaschner, Kristin; Wiff, Rodrigo Alexis; Hammond, Philip Steven; Quick, Nicola Jane
A critical review of the literature on population modelling
http://hdl.handle.net/10023/2241
The 2005 report of the National Research Council’s ‘Committee on Characterizing Biologically Significant Marine Mammal Behavior’ proposed a framework, which they called PCAD (Population Consequences of Acoustic Disturbance), that uses a series of transfer functions to link behavioural responses to sound with life functions, vital rates, and population change. The Committee suggested that the best understood transfer functions are those linking vital rates to population change. One of the main aims of this report is to document that understanding. However, we also show how the existing frameworks for modelling the dynamics of marine mammal populations can be extended to include the effects of behavioural responses on vital rates. In Chapter 1 we introduce the central concept of the rate of increase (lambda) of a population, which we believe is the most useful measure of the effects of behavioural responses on the dynamics of a population. If the value of lambda exceeds one, then the population will increase over time; if it is less than one, it will decrease. We show how changes in lambda provide a measure of the impact of human activities (such as exploitation, conservation, or disturbance) on a population. We also introduce structured population models, which take account of the fact that all individuals in a population are not identical, and show how the dynamics of different parts of a population can be modelled using a population projection matrix. The mathematical properties of this projection matrix can be used to determine the sensitivity of lambda to small changes in vital rates. Finally, we provide a very brief introduction to the concept of stochasticity, and the use of lambda to predict when (and if) a population might be driven to extinction. Chapter 2 describes how lambda also provides a measure of the Darwinian fitness of the individual members of a population.
An individual’s fitness, the contribution it will make to future generations, depends to a large extent on its body condition and on the risks of mortality to which it is exposed. Both of these could be affected by behavioural responses to sound. We also explain current theories about the relationship between an individual’s feeding behaviour and the abundance and distribution of prey, and how this can affect body condition. Chapter 3 provides a more detailed description of how elasticity analysis can be used to investigate the impact of changes in vital rates on lambda. Elasticity analysis is a useful tool for detecting which vital rates are most important in determining the dynamics of a population. However, its value is limited because it does not take account of random variations (stochasticity) and, in theory, it can only predict the effect of small changes in vital rates. Chapter 4 describes the fundamental concept of density dependence: the way in which vital rates change with population size or the availability of resources, such as prey. Not only is density dependence an essential prerequisite for population stability and sustainable use, but the form it takes will also determine how a population responds to behavioural changes. This is because behaviour, and particularly the effect of behavioural change on body condition, plays a central role in many of the mechanistic models of density dependence. Chapters 5 and 6 explore the way in which additional complexities, such as social structure and the way in which populations are distributed in space, can affect the dynamics of populations. Models that account for these complexities behave in a much less predictable way than the relatively simple structured models that form the core of Chapters 1-4. So far, the models of population dynamics that we have reviewed have been deterministic.
That is, they have assumed that the only way in which vital rates can vary is in response to a change in abundance, via density-dependent mechanisms. In Chapters 7 and 8 we investigate the effect of random variation (stochasticity) on population dynamics. We distinguish the effects of demographic stochasticity (chance variations in the number of animals that die or give birth in a time interval, which occur even if vital rates do not vary over time) and environmental stochasticity, which is the result of variations in vital rates across years. Variation in abundance may also occur as a result of environmental change and changes in the ecological community of which a population is a part. The effect of all these sources of variation is to reduce the realised growth rate of a population, and therefore to increase its risk of extinction. In Chapter 9 we consider how the basic population modelling framework described in Chapters 1-8 might be extended to take account of the life functions identified by the NRC Committee. We suggest that these life functions are useful for defining the context in which behavioural responses might affect vital rates, but that they do not need to be modelled explicitly. Removing life functions from the PCAD framework results in a much simpler structure, which is compatible with existing population modelling frameworks. However, these will have to be extended to allow population states that vary continuously, like body condition, to be modelled. Chapter 10 describes how changes in lambda can be detected. The simple analytical frameworks that are available for this are all vulnerable to the effects of variability that we introduced in Chapter 7. However, there is a framework (state-space and hidden Markov process modelling) that can account for the effects of this variability, and we recommend its use for detecting trends.
The additional benefit of this approach is that its use results in a detailed model of the dynamics of the population under investigation. Chapter 11 reviews the different model structures that can be used to describe the dynamics of a population, and explains when different forms of population models (e.g. discrete vs. continuous time, deterministic vs. stochastic) are most appropriate. We also discuss how these different frameworks can be extended to account for continuous population states, as recommended in Chapter 8. The final focus is on how state-space models can be fitted to time series of abundance estimates and information on vital rates. Chapter 12 looks at the relevance of the different modelling approaches described in the previous chapters for analysing the potential effects of behavioural responses to sound on population dynamics, particularly the kinds of sounds that may be generated by the oil and gas industry. We conclude that lambda, the population rate of increase, and its variation provide a useful measure of these effects. We also believe that the models used for this purpose will certainly have to account for the effects of variability and density dependence. They will probably also have to account for the effects of social structure and the way in which populations use space. The state-space modelling framework outlined in Chapter 11 can, in principle, be extended to capture all of these features, although work on this is still in its infancy.
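The projection-matrix machinery described in Chapter 1 is compact: lambda is the dominant eigenvalue of the projection matrix, and "if lambda exceeds one the population increases" is a statement about that eigenvalue. A two-stage illustration with invented vital rates (not values from the report):

```python
import numpy as np

# Hypothetical two-stage projection (Leslie) matrix: per-capita fecundities on
# the top row, a juvenile survival rate of 0.5 on the sub-diagonal.
# All values are invented for illustration.
A = np.array([[1.0, 1.0],
              [0.5, 0.0]])

# lambda: dominant eigenvalue = asymptotic per-step growth rate.
lam = max(abs(np.linalg.eigvals(A)))

# Here lambda = (1 + sqrt(3)) / 2, about 1.37 > 1, so this toy population grows;
# sensitivity/elasticity analysis (Chapter 3) asks how lam shifts as entries of A shift.
```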
Final Report to the Joint Industry Project of the International Association of Oil & Gas Producers on contract JIP22 07_20
Thu, 01 Jan 2009 00:00:00 GMT
http://hdl.handle.net/10023/2241
Cabrelli, Abigail; Harwood, John; Matthiopoulos, Jason; New, Leslie Frances; Thomas, Len
An update to the methods in Endangered Species Research 2011 paper "Estimating North Pacific right whale Eubalaena japonica density using passive acoustic cue counting"
http://hdl.handle.net/10023/2158
Sun, 01 Jan 2012 00:00:00 GMT http://hdl.handle.net/10023/2158 2012-01-01T00:00:00Z Marques, Tiago A.; Munger, Lisa; Thomas, Len; Wiggins, Sean; Hildebrand, John
Estimating abundance of rare, small mammals: A case study of the Key Largo woodrat (Neotoma floridana smalli)
http://hdl.handle.net/10023/2068
Estimates of animal abundance or density are fundamental quantities in ecology and conservation, but for many species such as rare, small mammals, obtaining robust estimates is problematic. In this thesis, I combine elements of two standard abundance estimation methods, capture-recapture and distance sampling, to develop a method called trapping point transects (TPT). In TPT, a "detection function", g(r) (i.e. the probability of capturing an animal, given that it is r metres from a trap when the trap is set), is estimated using a subset of animals whose locations are known prior to traps being set. Generalised linear models are used to estimate the detection function, and the model can be extended to include random effects to allow for heterogeneity in capture probabilities. Standard point transect methods are modified to estimate abundance. Two abundance estimators are available. The first estimator is based on the reciprocal of the expected probability of detecting an animal, P̂, where the expectation is over r; the second is the expectation of the reciprocal of P̂.
Performance of the TPT method under various sampling efforts and underlying true detection probabilities of individuals in the population was investigated in a simulation study. When the underlying probability of detection was high (g(0) = 0.88) and between-individual variation was small, survey effort could be surprisingly low (c. 510 trap nights) to yield low bias (c. 4%) in the two estimators; but under certain situations, the second estimator can be extremely biased. Uncertainty and relative bias in population estimates increased with decreasing detectability and increasing between-individual variation.
Abundance of the Key Largo woodrat (Neotoma floridana smalli), an endangered rodent with a restricted geographic range, was estimated using TPT. The TPT method compared well to other viable methods (capture-recapture and spatially-explicit capture-recapture), in terms of both field practicality and cost. The TPT method may generally be useful in estimating animal abundance in trapping studies and variants of the TPT method are presented.
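The difference between the two estimators can be seen in a small Monte Carlo sketch (the detection-function shape and all parameter values below are hypothetical, not taken from the thesis). By Jensen's inequality the expectation of the reciprocal of P̂ is at least the reciprocal of its expectation, and when detection probability varies strongly with distance the two diverge sharply:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical half-normal-style detection function: probability of capturing
# an animal that is r distance units from the trap when the trap is set.
def g(r, g0=0.88, sigma=20.0):
    return g0 * np.exp(-r**2 / (2 * sigma**2))

# Animals distributed uniformly in a disc of radius w around the trap,
# so the distance r has density 2r / w^2.
w = 50.0
r = w * np.sqrt(rng.uniform(size=200_000))
P = g(r)

est1 = 1.0 / P.mean()     # reciprocal of the expected detection probability
est2 = (1.0 / P).mean()   # expectation of the reciprocal
print(est1, est2)         # est2 > est1, by Jensen's inequality
```

Here est2 is roughly double est1; since these quantities scale observed counts up to abundance, the second estimator's sensitivity to animals with very low detection probability is exactly the "extremely biased under certain situations" behaviour noted in the abstract.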
Sat, 01 Jan 2011 00:00:00 GMT http://hdl.handle.net/10023/2068 2011-01-01T00:00:00Z Potts, Joanne M.
Complex Region Spatial Smoother (CReSS)
http://hdl.handle.net/10023/2048
Conventional smoothing over complicated coastal and island regions may result in errors across boundaries, due to the use of Euclidean distances to represent inter-point similarity. The new Complex Region Spatial Smoother (CReSS) method presented here uses estimated geodesic distances, model averaging and a local radial basis function to provide improved smoothing over complex domains. CReSS is compared, via simulation, to recent related smoothing techniques: thin plate splines (TPS; Harder and Desmarais, 1972), geodesic low rank thin plate splines (GLTPS; Wang and Ranalli, 2007) and the soap film smoother (SOAP; Wood et al., 2008). The GLTPS method cannot be used in areas with islands, and SOAP can be hard to parameterize. CReSS is comparable with, if not better than, all considered methods on a range of simulations. Supplementary materials for this article are available online.
This work is supported with funding from NERC UK
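The motivation for geodesic rather than Euclidean distances can be sketched on a toy grid domain with a barrier (a hypothetical peninsula; this is an illustration of the idea, not the CReSS implementation): two points that are close in straight-line distance but far "by water" should receive very little smoothing weight.

```python
from collections import deque
import math

n = 11                                   # an n x n grid of unit-spaced points
blocked = {(r, 5) for r in range(1, n)}  # a barrier: column 5, open only at row 0

def geodesic(start):
    """Breadth-first search: shortest within-domain path length from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < n and 0 <= nb[1] < n
                    and nb not in blocked and nb not in dist):
                dist[nb] = dist[(r, c)] + 1
                queue.append(nb)
    return dist

a, b = (10, 0), (10, 10)       # two points on opposite sides of the barrier
geo = geodesic(a)[b]           # the path must detour around the top: 30 steps
euclid = math.dist(a, b)       # straight-line distance: 10.0

# A Gaussian radial basis applied to each distance: the geodesic version
# correctly treats the two sides of the barrier as nearly unrelated.
sigma = 8.0
print(math.exp(-euclid**2 / (2 * sigma**2)), math.exp(-geo**2 / (2 * sigma**2)))
```

With a Euclidean distance the two points would share substantial weight across the barrier; with the geodesic distance the weight is effectively zero, which is the boundary-error problem CReSS addresses.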
Sat, 01 Jan 2011 00:00:00 GMT http://hdl.handle.net/10023/2048 2011-01-01T00:00:00Z Scott Hayward, Lindesay Alexandra Sarah; MacKenzie, Monique Lea; Donovan, Carl Robert; Walker, Cameron; Ashe, Erin
Bayesian modelling of integrated data and its application to seabird populations
http://hdl.handle.net/10023/1635
Integrated data analyses are becoming increasingly popular in studies of wild animal populations where two or more separate sources of data contain information about common parameters. Here we develop an integrated population model using abundance and demographic data from a study of common guillemots (Uria aalge) on the Isle of May, southeast Scotland. A state-space model for the count data is supplemented by three demographic time series (productivity and two mark-recapture-recovery (MRR) series), enabling the estimation of prebreeder emigration rate, a parameter for which there is no direct observational data and which is unidentifiable in the separate analysis of MRR data. A Bayesian approach using MCMC provides a flexible and powerful analysis framework.
This model is extended to provide predictions of future population trajectories. Adopting random effects models for the survival and productivity parameters, we implement the MCMC algorithm to obtain a posterior sample of the underlying process means and variances (and population sizes) within the study period. Given this sample, we predict future demographic parameters, which in turn allows us to predict future population sizes and obtain the corresponding posterior distribution. Under the assumption that recent, unfavourable conditions persist in the future, we obtain a posterior probability of 70% that there is a population decline of >25% over a 10-year period.
Lastly, using MRR data we test for spatial, temporal and age-related correlations in guillemot survival among three widely separated Scottish colonies that have varying overlap in nonbreeding distribution. We show that survival is highly correlated over time for colonies/age classes sharing wintering areas, and essentially uncorrelated for those with separate wintering areas. These results strongly suggest that one or more aspects of winter environment are responsible for spatiotemporal variation in survival of British guillemots, and provide insight into the factors driving multi-population dynamics of the species.
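The projection step described above can be sketched as follows, with made-up numbers standing in for the posterior sample of process means and variances that the MCMC fit would supply:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-ins for posterior draws of the mean and SD of the annual log growth
# rate (in the real analysis these come from the fitted random-effects model).
mu = rng.normal(-0.02, 0.01, size=2000)
sigma = np.abs(rng.normal(0.08, 0.02, size=2000))

horizon = 10
declines = 0
for m, s in zip(mu, sigma):
    log_change = rng.normal(m, s, size=horizon).sum()  # one future trajectory
    if np.exp(log_change) < 0.75:                      # a decline of >25%
        declines += 1
print(declines / mu.size)  # posterior probability of a >25% decline in 10 years
```

Each posterior draw of (mu, sigma) generates one future trajectory, so the resulting proportion integrates over both parameter uncertainty and future environmental stochasticity, which is what makes the reported "70% probability of a >25% decline" a genuinely posterior statement.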
Tue, 30 Nov 2010 00:00:00 GMT http://hdl.handle.net/10023/1635 2010-11-30T00:00:00Z Reynolds, Toby J.
Statistical models for the long-term monitoring of songbird populations: a Bayesian analysis of constant effort sites and ring-recovery data
http://hdl.handle.net/10023/885
To underpin and improve advice given to government and other interested parties on the state of Britain’s common songbird populations, new models for analysing ecological data are developed in this thesis. These models use data from the British Trust for Ornithology’s Constant Effort Sites (CES) scheme, an annual bird-ringing programme in which catch effort is standardised. Data from the CES scheme are routinely used to index abundance and productivity, and to a lesser extent estimate adult survival rates. However, two features of the CES data that complicate analysis were previously inadequately addressed, namely the presence in the catch of “transient” birds not associated with the local population, and the sporadic failure in the constancy of effort assumption arising from the absence of within-year catch data. The current methodology is extended, with efficient Bayesian models developed for each of these demographic parameters that account for both of these data nuances, and from which reliable and usefully precise estimates are obtained.
Of increasing interest is the relationship between abundance and the underlying vital rates, an understanding of which facilitates effective conservation. CES data are particularly amenable to an integrated approach to population modelling, providing a combination of demographic information from a single source. Such an integrated approach is developed here, employing Bayesian methodology and a simple population model to unite abundance, productivity and survival within a consistent framework. Independent data from ring-recoveries provide additional information on adult and juvenile survival rates. Specific advantages of this new integrated approach are identified, among which is the ability to determine juvenile survival accurately, to disentangle the probabilities of survival and permanent emigration, and to obtain estimates of total seasonal productivity.
The methodologies developed in this thesis are applied to CES data from Sedge Warbler, Acrocephalus schoenobaenus, and Reed Warbler, A. scirpaceus.
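The "simple population model" uniting abundance, productivity and survival can be sketched as a two-age-class annual update (all rates below are hypothetical; the thesis embeds such a model in a Bayesian state-space analysis rather than running it deterministically):

```python
# Adults produce rho fledglings per breeding pair; fledglings survive their
# first year with probability phi_j, adults with probability phi_a.
# All values are illustrative, not estimates from the thesis.
phi_a = 0.5   # adult annual survival
phi_j = 0.2   # juvenile (first-year) survival
rho = 2.4     # fledglings per breeding pair

def step(n_adults):
    recruits = (n_adults / 2) * rho * phi_j  # surviving young of the year
    return n_adults * phi_a + recruits

n = 1000.0
for year in range(5):
    n = step(n)

# Deterministic annual growth rate implied by these vital rates:
lam = phi_a + rho * phi_j / 2
print(lam)  # 0.74: these illustrative rates imply a declining population
```

The value of the integrated approach is that abundance counts, productivity data and ring-recoveries all inform the same three parameters in this shared equation, which is what makes otherwise weakly identified quantities (like juvenile survival) estimable.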
Fri, 25 Jun 2010 00:00:00 GMT http://hdl.handle.net/10023/885 2010-06-25T00:00:00Z Cave, Vanessa M.
Topics in estimation of quantum channels
http://hdl.handle.net/10023/869
A quantum channel is a mapping which sends density matrices to density matrices. The estimation of quantum channels is of great importance to the field of quantum information. In this thesis two topics related to estimation of quantum channels are investigated. The first of these is the upper bound of Sarovar and Milburn (2006) on the Fisher information obtainable by measuring the output of a channel. Two questions raised by Sarovar and Milburn about their bound are answered. A Riemannian metric on the space of quantum states is introduced, related to the construction of the Sarovar and Milburn bound. Its properties are characterized.
The second topic investigated is the estimation of unitary channels. The situation is considered in which an experimenter has several non-identical unitary channels that have the same parameter. It is shown that it is possible to improve estimation using the channels together, analogous to the case of identical unitary channels. Also, a new method of phase estimation is given based on a method sketched by Kitaev (1996). Unlike other phase estimation procedures which perform similarly, this procedure requires only very basic experimental resources.
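As a concrete instance of "a mapping which sends density matrices to density matrices", here is the depolarizing channel on a single qubit (a standard textbook example, not one of the specific channels studied in the thesis):

```python
import numpy as np

def depolarize(rho, p):
    """E_p(rho) = (1 - p) rho + p I/d: mixes a state towards the maximally
    mixed state, preserving trace and positivity for 0 <= p <= 1."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)  # a valid qubit density matrix
out = depolarize(rho, 0.3)

print(np.trace(out).real)                 # 1.0: trace is preserved
print(np.min(np.linalg.eigvalsh(out)))    # > 0: output is a density matrix
```

Channel estimation asks the reverse question: given input states and measurements on outputs like `out`, how well can the unknown parameter (here p) be estimated, which is exactly where Fisher-information bounds of the Sarovar-Milburn type enter.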
Wed, 23 Jun 2010 00:00:00 GMT http://hdl.handle.net/10023/869 2010-06-23T00:00:00Z O'Loan, Caleb J.
Multi-species state-space modelling of the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) in Scotland
http://hdl.handle.net/10023/864
State-space modelling is a powerful tool to study ecological systems. The direct inclusion of uncertainty, unification of models and data, and ability to model unobserved, hidden states increases our knowledge about the environment and provides new ecological insights. I extend the state-space framework to create multi-species models, showing that the ability to model ecosystem interactions is limited only by data availability. State-space models are fit using both Bayesian and Frequentist methods, making them independent of a statistical school of thought. Bayesian approaches can have the advantage in their ability to account for missing data and fit hierarchical structures and models with many parameters to limited data, often the case in ecological studies. I have taken a Bayesian model fitting approach in this thesis.
The predator-prey interactions between the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) are used to demonstrate state-space modelling’s capabilities. The harrier data are believed to be known without error, while missing data make the cyclic dynamics of the grouse harder to model. The grouse-harrier interactions are modelled in a multi-species state-space model, rather than including one species as a covariate in the other’s model. Finally, models are included for the harriers’ alternate prey.
The single- and multi-species state-space models for the predator-prey interactions provide insight into the species’ management. The models investigate aspects of the species’ behaviour, from the mechanisms behind grouse cycles to what motivates harrier immigration. The inferences drawn from these models are applicable to management, suggesting actions to halt grouse cycles or mitigate the grouse-harrier conflict. Overall, the multi-species models suggest that two popular ideas for grouse-harrier management, diversionary feeding and habitat manipulation to reduce alternate prey densities, will not have the desired effect, and in the case of reducing prey densities, may even increase the harriers’ impact on grouse chicks.
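The structure of a multi-species state-space model can be sketched by simulation (every parameter below is invented for illustration; the thesis fits such models to real survey data rather than simulating them): a stochastic state process couples the two species, and a separate observation process adds survey error.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 30
grouse = np.empty(T)
harrier = np.empty(T)
grouse[0], harrier[0] = 1000.0, 10.0

for t in range(1, T):
    g, h = grouse[t - 1], harrier[t - 1]
    # State process: grouse density dependence plus harrier predation, and
    # harrier growth driven by grouse availability, each with process noise.
    grouse[t] = g * np.exp(0.3 - 0.0002 * g - 0.02 * h + rng.normal(0, 0.05))
    harrier[t] = h * np.exp(-0.1 + 0.0002 * g + rng.normal(0, 0.05))

# Observation process: surveys see the hidden states with log-normal error.
obs_grouse = grouse * np.exp(rng.normal(0, 0.1, T))
obs_harrier = harrier * np.exp(rng.normal(0, 0.1, T))
print(obs_grouse[-1], obs_harrier[-1])
```

Fitting works in the opposite direction: given only `obs_grouse` and `obs_harrier`, the state-space machinery recovers the hidden states and interaction parameters, which is why the interactions can be modelled directly rather than entering one species as a covariate in the other's model.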
Wed, 23 Jun 2010 00:00:00 GMT http://hdl.handle.net/10023/864 2010-06-23T00:00:00Z New, Leslie Frances
Distance software: design and analysis of distance sampling surveys for estimating population size
http://hdl.handle.net/10023/817
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance.
2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use.
3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated.
4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: CDS (conventional distance sampling), which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; MCDS (multiple covariate distance sampling), which allows covariates in addition to distance; and MRDS (mark-recapture distance sampling), which relaxes the assumption of certain detection at zero distance.
5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap.
6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the DSM (density surface modelling) analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software.
7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software implementing these methods is described, making them accessible to practicing ecologists.
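The core CDS calculation for a line transect with a half-normal detection function can be sketched in a few lines (the simulated data and all parameter values are invented; Distance automates model selection, truncation, stratification and variance estimation on top of this):

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate a line transect: animals uniform within distance w of a line of
# length L, detected with half-normal probability g(x) = exp(-x^2 / 2 sigma^2).
sigma, w, L, D_true = 10.0, 50.0, 1000.0, 0.5        # distance units arbitrary
n_animals = rng.poisson(D_true * 2 * w * L)
x = rng.uniform(0, w, n_animals)                     # perpendicular distances
detected = x[rng.uniform(size=n_animals) < np.exp(-x**2 / (2 * sigma**2))]

# Fit: with negligible truncation, the MLE of sigma^2 is the mean of x^2,
# and the effective strip half-width is mu = sigma * sqrt(pi / 2).
sigma_hat = np.sqrt(np.mean(detected**2))
mu_hat = sigma_hat * np.sqrt(np.pi / 2)
D_hat = detected.size / (2 * L * mu_hat)             # animals per unit area
print(D_hat)  # close to the true density of 0.5
```

The estimator divides the number of detections by the effectively surveyed area (2 L mu), which is the CDS logic of point 4 above; the MCDS and MRDS engines generalize the same idea.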
Fri, 01 Jan 2010 00:00:00 GMT http://hdl.handle.net/10023/817 2010-01-01T00:00:00Z Thomas, Len; Buckland, Stephen Terrence; Rexstad, Eric; Laake, J. L.; Strindberg, S.; Hedley, S. L.; Bishop, J. R. B.; Marques, Tiago A.
Embedding population dynamics in mark-recapture models
http://hdl.handle.net/10023/718
Mark-recapture methods use repeated captures of individually identifiable animals to provide estimates of properties of populations. Different models allow estimates to be obtained for population size and rates of processes governing population dynamics. State-space models consist of two linked processes evolving simultaneously over time. The state process models the evolution of the true, but unknown, states of the population. The observation process relates observations on the population to these true states.
Mark-recapture models specified within a state-space framework allow population dynamics models to be embedded in inference, ensuring that estimated changes in the population are consistent with assumptions regarding the biology of the modelled population. This overcomes a limitation of current mark-recapture methods.
Two alternative approaches are considered. The "conditional" approach conditions on known numbers of animals possessing capture history patterns including capture in the current time period. An animal's capture history determines its state; consequently, capture parameters appear in the state process rather than the observation process. There is no observation error in the model. Uncertainty occurs only through the numbers of animals not captured in the current time period.
An "unconditional" approach is considered in which the capture histories are regarded as observations. Consequently, capture histories do not influence an animal's state and capture probability parameters appear in the observation process. Capture histories are considered a random realization of the stochastic observation process. This is more consistent with traditional mark-recapture methods.
Development and implementation of particle filtering techniques for fitting these models under each approach are discussed. Simulation studies show reasonable performance for the unconditional approach and highlight problems with the conditional approach. Strengths and limitations of each approach are outlined, with reference to Soay sheep data analysis, and suggestions are presented for future analyses.
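A bootstrap particle filter for a toy version of such a model can be sketched as follows (the counts and parameters are invented, and the state and observation processes are far simpler than those developed in the thesis):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import poisson

rng = np.random.default_rng(5)

phi, b = 0.8, 0.3                          # survival and per-capita birth rate
y = np.array([100, 95, 102, 98, 90, 94])   # hypothetical observed counts

P = 5000
particles = rng.poisson(100, P)    # draws from a prior on the initial state
loglik = 0.0
for t, obs in enumerate(y):
    if t > 0:
        # State process: binomial survival plus Poisson recruitment.
        particles = rng.binomial(particles, phi) + rng.poisson(b * particles)
    # Observation process: weight each particle by the Poisson likelihood.
    logw = poisson.logpmf(obs, particles)
    loglik += logsumexp(logw) - np.log(P)
    w = np.exp(logw - logsumexp(logw))
    filtered_mean = float(np.sum(w * particles))
    particles = rng.choice(particles, size=P, p=w)   # multinomial resampling
print(filtered_mean, loglik)
```

The propagate-weight-resample cycle is what makes particle filtering attractive here: the state process can be an arbitrary population dynamics model (including the conditional and unconditional capture-history formulations above) so long as it can be simulated.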
Wed, 24 Jun 2009 00:00:00 GMT http://hdl.handle.net/10023/718 2009-06-24T00:00:00Z Bishop, Jonathan R. B.
The importance of analysis method for breeding bird survey population trend estimates
http://hdl.handle.net/10023/685
Population trends from the Breeding Bird Survey are widely used to focus conservation efforts on species thought to be in decline and to test preliminary hypotheses regarding the causes of these declines. A number of statistical methods have been used to estimate population trends, but there is no consensus as to which is the most reliable. We quantified differences in the trend estimates of different analysis methods applied to the same subset of Breeding Bird Survey data. We estimated trends for 115 species in British Columbia using three analysis methods: U.S. National Biological Service route regression, Canadian Wildlife Service route regression, and nonparametric rank-trends analysis. Overall, the number of species estimated to be declining was similar among the three methods, but the number of statistically significant declines was not (15, 8, and 29, respectively). In addition, many differences existed among methods in the trend estimates assigned to individual species. Comparing the two route regression methods, Canadian Wildlife Service estimates had a greater absolute magnitude on average than those of the U.S. National Biological Service method. U.S. National Biological Service estimates were on average more positive than the Canadian Wildlife Service estimates when the respective agency's data selection criteria were applied separately. These results imply that our ability to detect population declines and to prioritize species of conservation concern depends strongly upon the analysis method used. This highlights the need for further research to determine how best to estimate trends accurately from the data. We suggest a method for evaluating the performance of the analysis methods by using simulated Breeding Bird Survey data.
Mon, 01 Jan 1996 00:00:00 GMT
http://hdl.handle.net/10023/685
1996-01-01T00:00:00Z
Thomas, Len; Martin, Kathy

Retrospective power analysis
http://hdl.handle.net/10023/679
Many papers have appeared in the recent biological literature encouraging us to incorporate statistical power analysis into our hypothesis testing protocol (Peterman 1990; Fairweather 1991; Muller & Benignus 1992; Taylor & Gerrodette 1993; Searcy-Bernal 1994; Thomas & Juanes 1996). The importance of doing a power analysis before beginning a study (prospective power analysis) is universally accepted: such analyses help us to decide how many samples are required to have a good chance of getting unambiguous results. In contrast, the role of power analysis after the data are collected and analyzed (retrospective power analysis) is controversial, as is evidenced by the papers of Reed and Blaustein (1995) and Hayes and Steidl (1997). The controversy is over the use of information from the sample data in retrospective power calculations. As I will show, the type of information used has fundamental implications for the value of such analyses. I compare the approaches to calculating retrospective power, noting the strengths and weaknesses of each, and make general recommendations as to how and when retrospective power analyses should be conducted.
The pdf contains the article; the ASCII file contains SAS code to calculate power and confidence limits for simple linear regression, as detailed in the article appendix.
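The SAS appendix code is not reproduced here, but the calculation it performs, power for the slope test in simple linear regression, can be sketched with a normal approximation (function names are mine; the article's code uses an exact computation):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def slope_power(beta, sigma, xs, z_crit=1.959964):
    # Approximate two-sided power for H0: slope = 0 in simple linear
    # regression with true slope beta, error SD sigma, fixed design xs.
    # Normal approximation to the t test; z_crit is z_{0.975} by default.
    xbar = sum(xs) / len(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    se = sigma / math.sqrt(sxx)      # standard error of the slope estimate
    shift = abs(beta) / se           # standardized effect size
    return 1.0 - norm_cdf(z_crit - shift) + norm_cdf(-z_crit - shift)
```

The prospective/retrospective distinction is in the inputs, not the formula: a prospective analysis plugs in an externally chosen effect size, whereas the controversial "observed power" plugs the sample estimate of beta back in.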
Wed, 01 Jan 1997 00:00:00 GMT
http://hdl.handle.net/10023/679
1997-01-01T00:00:00Z
Thomas, Len

A unified framework for modelling wildlife population dynamics
http://hdl.handle.net/10023/678
This paper proposes a unified framework for defining and fitting stochastic, discrete-time, discrete-stage population dynamics models. The biological system is described by a state–space model, where the true but unknown state of the population is modelled by a state process, and this is linked to survey data by an observation process. All sources of uncertainty in the inputs, including uncertainty about model specification, are readily incorporated. The paper shows how the state process can be represented as a generalization of the standard Leslie or Lefkovitch matrix. By dividing the state process into subprocesses, complex models can be constructed from manageable building blocks. The paper illustrates the approach with a model of the British Grey Seal metapopulation, using sequential importance sampling with kernel smoothing to fit the model.
The pdf document contains the full article text; program code (in S-PLUS 6.1) for the example analysis is in the three text files; data is available from the Sea Mammal Research Unit (http://www.smru.st-and.ac.uk)
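The grey seal model itself is in the accompanying S-PLUS files; as a hedged illustration of the general idea only, here is one transition of a stochastic Leslie-type state process (stage structure and parameter values invented), with Poisson births and binomial survival as the subprocess building blocks:

```python
import random

def poisson(lam, rng):
    # Knuth's multiplicative sampler; adequate for small lam.
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def binomial(n, prob, rng):
    return sum(1 for _ in range(n) if rng.random() < prob)

def leslie_step(state, fec, surv, rng):
    # One step of a stochastic generalized Leslie process:
    # births ~ Poisson driven by stage fecundities, then binomial
    # survival advances each stage (the last stage is absorbing).
    births = poisson(sum(f * n for f, n in zip(fec, state)), rng)
    new = [births] + [binomial(n, s, rng) for n, s in zip(state[:-1], surv)]
    new[-1] += binomial(state[-1], surv[-1], rng)  # survivors staying adult
    return new
```

Setting the sampled quantities to their expectations recovers the deterministic Leslie/Lefkovitch projection; further subprocesses (movement between colonies, say) compose by feeding one stage vector into the next, which is the building-block structure the paper describes.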
Sat, 01 Jan 2005 00:00:00 GMT
http://hdl.handle.net/10023/678
2005-01-01T00:00:00Z
Thomas, Len; Buckland, Stephen T.; Newman, KB; Harwood, John

WinBUGS for population ecologists: Bayesian modeling using Markov Chain Monte Carlo methods.
http://hdl.handle.net/10023/677
The computer package WinBUGS is introduced. We first give a brief introduction to Bayesian theory and its implementation using Markov chain Monte Carlo (MCMC) algorithms. We then present three case studies showing how WinBUGS can be used when classical theory is difficult to implement. The first example uses data on white storks from Baden Württemberg, Germany, to demonstrate the use of mark-recapture models to estimate survival, and also how to cope with unexplained variance through random effects. Recent advances in methodology and also the WinBUGS software allow us to introduce (i) a flexible way of incorporating covariates using spline smoothing and (ii) a method to deal with missing values in covariates. The second example shows how to estimate population density while accounting for detectability, using distance sampling methods applied to a test dataset collected on a known population of wooden stakes. Finally, the third case study involves the use of state-space models of wildlife population dynamics to make inferences about density dependence in a North American duck species. Reversible Jump MCMC is used to calculate the probability of various candidate models. For all examples, data and WinBUGS code are provided.
This paper was presented at the EURING 2007 Technical Meeting, January 14-21, Dunedin, New Zealand. It has been submitted for publication in the conference proceedings, which will appear as a special issue of Environmental and Ecological Statistics.; The zip file contains accompanying code in WinBUGS
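WinBUGS's own samplers are more sophisticated (Gibbs steps, adaptive rejection sampling), but the underlying MCMC idea can be sketched with a random-walk Metropolis update on a toy posterior (all data and tuning values invented):

```python
import math
import random

def metropolis(log_post, x0, steps, scale, rng):
    # Random-walk Metropolis: propose a Gaussian step, accept with
    # probability min(1, posterior ratio); the chain's long-run
    # distribution is the target posterior.
    x, chain = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    return chain

# Toy target: normal mean with known unit variance and a flat prior,
# so the posterior is N(ybar, 1/n). Data are invented.
data = [1.8, 2.2, 1.9, 2.1, 2.0]
log_post = lambda mu: -0.5 * sum((y - mu) ** 2 for y in data)
```

Summaries are taken over the chain after discarding a burn-in, exactly as one would do with samples returned by WinBUGS.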
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10023/677
2008-01-01T00:00:00Z
Giminez, O; Bonner, S J; King, Ruth; Parker, R A; Brooks, S P; Jamieson, L E; Grosbois, V; Morgan, B J T; Thomas, Len

Density estimation and time trend analysis of large herbivores in Nagarhole, India
http://hdl.handle.net/10023/669
Density estimates for six large herbivore species were obtained through
analysis of line transect data from Nagarhole National Park, south-western India,
collected between 1989 and 2000. These species were Chital (Axis axis), Sambar
(Cervus unicolor), Gaur (Bos gaurus), Wild Pig (Sus scrofa), Muntjac (Muntiacus
muntjak) and Asian Elephant (Elephas maximus). Multiple Covariate Distance
Sampling (MCDS) models were used to derive these density estimates. The distance
histograms showed a relatively large spike at zero, which can lead to problems when
fitting MCDS models. The effects of this spike were investigated and remedied by
forward truncation. Density estimates from the unmodified dataset were 10-15% higher
than estimates from the forward-truncated data, and up to 37% higher for
Muntjac; these may therefore be overestimates. Empirical trend models were then
fitted to the density estimates. Overall trends were stable, though there were intra-habitat
differences in trends for some species. Trends were similar whether or not
forward truncation was applied.
MRes in Environmental Biology
Sat, 01 Jan 2005 00:00:00 GMT
http://hdl.handle.net/10023/669
2005-01-01T00:00:00Z
Gangadharan, Aditya

Models of random wildlife movement with an application to distance sampling
http://hdl.handle.net/10023/668
In this paper we present three models of random wildlife movement: a one-dimensional model of wildlife-observer encounters on roads, an analogous two-dimensional model, and a further two-dimensional model that borrows from the ideas of statistical mechanics. We then derive unbiased estimators of wildlife density in terms of encounters for each of these models. By extending these results to incorporate uncertain detection, we suggest three novel distance sampling methods and briefly consider possible field applications.
Mon, 01 Jan 2007 00:00:00 GMT
http://hdl.handle.net/10023/668
2007-01-01T00:00:00Z
DiTraglia, Francis J.

Designing a shipboard line transect survey to estimate cetacean abundance off the Azores Archipelago, Portugal
http://hdl.handle.net/10023/667
Management schemes dedicated to the conservation of wildlife populations rely on the effective monitoring of population size, and this requires the accurate and precise estimation of abundance. The accuracy and precision of estimates are determined to a large extent by the survey design. Line transect surveys are commonly applied to wildlife population assessments in which the primary purpose of a survey design is to ensure that the critical distance sampling assumptions are met.
Little information is available regarding cetacean abundance in the Archipelago of the Azores (Portugal). This study aims to design a shipboard line transect survey that allows the collection of the data required to provide abundance estimates for such species. Several aspects must be taken into consideration when designing a survey to estimate cetacean abundance. Design is an iterative process, with a constant trade-off between logistic constraints and the desired statistical robustness. Information on this process, including the criteria behind the choices made when defining the elements of a survey design, is provided to aid policy makers and environmental managers.
Three survey effort scenarios are provided to illustrate the range of possibilities between statistical robustness and logistic/management restrictions. A survey is designed for the most economical scenario (L = 5,000 km), although the second scenario (L = 17,600 km) is the one recommended for implementation, given that it provides robust estimates of abundance (CV <= 0.2).
Revised version November 2008. MRes in Marine Mammal Science
Tue, 01 Jan 2008 00:00:00 GMT
http://hdl.handle.net/10023/667
2008-01-01T00:00:00Z
Faustino, Cláudia Estevinho Santos

Behavioural changes of a long-ranging diver in response to oceanographic conditions
http://hdl.handle.net/10023/665
The development of an animal-borne instrument that can record oceanographic measurements (CTD-SRDL) has enabled the collection of oceanographic data at a scale relevant to the counterpart behavioural data, both in time and 3-dimensional space. This has advanced the potential for studies of the behaviour of deep-diving marine animals and the way in which they respond to their environment, yet the nature of the data delivered by CTD-SRDLs presents substantial analytical challenges and places constraints on its biological interpretation. Behavioural and environmental data, collected using CTD-SRDLs deployed on southern elephant seals (Mirounga leonina) from the South Georgia subpopulation in 2004 and 2005, are analysed for 13 females and 4 males (21,015 dives). Compressed dive profiles are used to classify individual dives into six distinct types based on their 2-dimensional time-depth characteristics using random forest classification. The relationship between dive type and environmental variables, derived from oceanographic data recorded on board the animals, is investigated in the context of regression analysis, employing a multinomial model, as well as independently fitted Generalized Linear Models (GLM) and Generalized Additive Models (GAM) for each dive type. Regression is not found to be an appropriate method for analysing abstracted behavioural dive data, and other methods are suggested. We show that functional specializations can be manifested within a dive type, using square bottom dives (SQ) as an example. The usefulness of dive classification is discussed in the context of behavioural interpretation, and validity of the ecological functions attached to each class. Preliminary analyses are important drivers of further research into improving the interpretability of abstracted behavioural data, and developing efficient, standardized methods for widespread application to this type of data, which is obtained in abundance via satellite telemetry.
BL 5019 Research project. MRes Environmental Biology
Mon, 01 Jan 2007 00:00:00 GMT
http://hdl.handle.net/10023/665
2007-01-01T00:00:00Z
Photopoulos, Theoni

Using generalized estimating equations with regression splines to improve analysis of butterfly transect data
http://hdl.handle.net/10023/488
Surveying animal populations is an important aspect of wildlife
management. Distinguishing trend from random fluctuations and
quantifying trend are key goals in any analysis.
The aim of this thesis is to review analyses of Butterfly Monitoring
Survey (BMS) data and to develop new methods which address some
flaws in previous studies. The BMS was established in 1976 at Monks
Wood, Cambridgeshire and sites were added over time throughout
Britain in order to monitor butterfly population trends. Weekly
counts are made over the monitoring season and the main aims are to
produce annual indices and compare these indices over time for any
particular species.
Originally, weekly counts were summed to produce relative indices
and missing counts were estimated using linear interpolation. This
thesis discusses the weaknesses of this basic method
and suggests possible improvements.
In recent years, with advancements in statistical methods and
increased computer power, new methods can be applied to accommodate
the longitudinal and flexible nature of ecological data.
Mixed Models, Generalized Estimating Equations and Generalized
Additive Models are used, and the relative merits of each modelling
approach are discussed. These methods allow for correlation and
non-linearity in the data.
Model selection is an important consideration when modelling, and
different tests are introduced and compared.
Once a model is selected, site-level indices are estimated, which
can be collated to produce regional and national indices. Different
methods of estimating precision around indices are also contrasted.
Bootstrapping is found to be a convenient and dependable approach.
Abundance is difficult to disentangle from detectability when only
counts of individuals are recorded. Methods for dealing with this
problem are suggested.
Once reliable annual abundance estimates are found, they can be
compared over time using a variety of statistical techniques. The
chain-ratio method is applied to a subset of real data.
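The basic index construction the thesis criticizes can be sketched directly (function name mine): fill missing weekly counts by linear interpolation between the nearest observed weeks, then sum over the season.

```python
def annual_index(weekly):
    # weekly: one count per monitoring week, None where a visit was missed.
    # Missing weeks are linearly interpolated between the nearest observed
    # weeks; gaps at either end copy the nearest observation.
    obs = [(i, c) for i, c in enumerate(weekly) if c is not None]
    filled = []
    for i, c in enumerate(weekly):
        if c is not None:
            filled.append(float(c))
            continue
        left = [(j, v) for j, v in obs if j < i]
        right = [(j, v) for j, v in obs if j > i]
        if not left:
            filled.append(float(right[0][1]))
        elif not right:
            filled.append(float(left[-1][1]))
        else:
            j0, v0 = left[-1]
            j1, v1 = right[0]
            filled.append(v0 + (v1 - v0) * (i - j0) / (j1 - j0))
    return sum(filled)
```

Treating interpolated values as real observations understates uncertainty in the resulting index, which is one motivation for the GEE and GAM alternatives developed in the thesis.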
Sun, 01 Jun 2008 00:00:00 GMT
http://hdl.handle.net/10023/488
2008-06-01T00:00:00Z
Brewer, Ciara

Incorporating measurement error and density gradients in distance sampling surveys
http://hdl.handle.net/10023/391
Distance sampling is one of the most commonly used methods for estimating density
and abundance. Conventional methods are based on the distances of detected animals
from the center of point transects or the center line of line transects. These distances
are used to model a detection function: the probability of detecting an animal, given
its distance from the line or point. The probability of detecting an animal in the
covered area is given by the mean value of the detection function with respect to
the available distances to be detected. Given this probability, a Horvitz-Thompson-
like estimator of abundance for the covered area follows, hence using a model-based
framework. Inferences for the wider survey region are justified using the survey design.
Conventional distance sampling methods are based on a set of assumptions. In
this thesis I present results that extend distance sampling on two fronts.
Firstly, estimators are derived for situations in which there is measurement error in
the distances. These estimators use information about the measurement error in two
ways: (1) a biased estimator based on the contaminated distances is multiplied by an
appropriate correction factor, which is a function of the errors (PDF approach), and
(2) cast into a likelihood framework that allows parameter estimation in the presence
of measurement error (likelihood approach).
Secondly, methods are developed that relax the conventional assumption that the
distribution of animals is independent of distance from the lines or points (usually
guaranteed by appropriate survey design). In particular, the new methods deal with
the case where animal density gradients are caused by the use of non-random sampler
allocation, for example transects placed along linear features such as roads or streams.
This is dealt with separately for line and point transects, and at a later stage an
approach for combining the two is presented.
A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
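As a concrete anchor for the conventional setup the thesis extends, here is a sketch of the detection function and the Horvitz-Thompson-like estimator for a line transect, assuming a half-normal detection function (function names and parameter values are mine):

```python
import math

def mean_detect_prob(sigma, w, m=1000):
    # Average of the half-normal detection function
    # g(x) = exp(-x^2 / (2 sigma^2)) over available distances x in [0, w];
    # for line transects these are uniform (midpoint rule with m cells).
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * w / m
        total += math.exp(-x * x / (2.0 * sigma * sigma))
    return total / m

def ht_density(n, p, L, w):
    # Horvitz-Thompson-like estimator: n detections, each inflated by
    # the mean detection probability p, over covered area 2 * L * w.
    return n / (p * 2.0 * L * w)
```

The thesis's two extensions modify exactly these ingredients: measurement error contaminates the distances entering the detection-function fit, and density gradients break the uniformity of available distances assumed in `mean_detect_prob`.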
Thu, 01 Nov 2007 00:00:00 GMT
http://hdl.handle.net/10023/391
2007-11-01T00:00:00Z
Marques, Tiago Andre Lamas Oliveira

A Bayesian approach to modelling field data on multi-species predator-prey interactions
http://hdl.handle.net/10023/174
Multi-species functional response models are required to model the predation of generalist
predators, which consume more than one prey species. In chapter 2, a new model for the
multi-species functional response is presented. This model can describe generalist predators
that exhibit functional responses of Holling type II to some of their prey and of type III to
other prey. In chapter 3, I review some of the theoretical distinctions between Bayesian and
frequentist statistics and show how Bayesian statistics are particularly well-suited for the
fitting of functional response models because uncertainty can be represented comprehensively.
In chapters 4 and 5, the multi-species functional response model is fitted to field data on
two generalist predators: the hen harrier Circus cyaneus and the harp seal Phoca groenlandica.
I am not aware of any previous Bayesian model of the multi-species functional response that
has been fitted to field data.
The hen harrier's functional response fitted in chapter 4 is strongly sigmoidal to the densities
of red grouse Lagopus lagopus scoticus, but no type III shape was detected in the response to
the two main prey species, field vole Microtus agrestis and meadow pipit Anthus pratensis. The
impact of using Bayesian or frequentist models on the resulting functional response is discussed.
In chapter 5, no functional response could be fitted to the data on harp seal predation. Possible
reasons are discussed, including poor data quality or a lack of relevance of the available data for
informing a behavioural functional response model.
I conclude with a comparison of the role that functional responses play in behavioural, population
and community ecology and emphasise the need for further research into unifying these different
approaches to understanding predation with particular reference to predator movement.
In an appendix, I evaluate the possibility of using a functional response for inferring the
abundances of prey species from performance indicators of generalist predators feeding on these prey.
I argue that this approach may be futile in general, because a generalist predator's energy intake
does not depend on the density of any single of its prey, so that the possibly unknown densities
of all prey need to be taken into account.
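The non-identifiability argument can be made concrete with a toy example of my own construction (an illustrative two-prey type II disc equation with equal handling times and equal energy value per prey item, not the thesis's model): very different prey-density combinations can yield identical total intake, so intake alone cannot recover one prey's density.

```python
import numpy as np

def total_intake(N, a=(0.2, 0.1), h=(0.05, 0.05)):
    """Total intake under an illustrative two-prey Holling type II
    disc equation, assuming equal energy value per prey item."""
    N, a, h = map(np.asarray, (N, a, h))
    return np.sum(a * N) / (1.0 + np.sum(a * h * N))

# Two very different prey configurations give identical intake:
i1 = total_intake([10.0, 0.0])   # all intake from prey 1
i2 = total_intake([0.0, 20.0])   # all intake from prey 2
```

Here intake depends on the densities only through the weighted sum of attack terms, so any densities with the same weighted sum are indistinguishable from intake data alone.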
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/10023/1742006-01-01T00:00:00ZAsseburg, Christian
Reconstruction of foliations from directional information
http://hdl.handle.net/10023/158
In many areas of science, especially geophysics, geography and
meteorology, the data are often directions or axes rather than
scalars or unrestricted vectors. Directional statistics considers
data which are mainly unit vectors lying in two- or
three-dimensional space (R² or R³). One
way in which directional data arise is as normals to foliations. A
(codimension-1) foliation of Rᵈ is a system
of non-intersecting (d-1)-dimensional surfaces filling out the
whole of Rᵈ. At each point z of Rᵈ, any given codimension-1 foliation determines a
unit vector v normal to the surface through z.
The problem considered here is that of reconstructing the foliation
from observations (zᵢ, vᵢ), i=1,...,n. One
way of doing this is rather similar to fitting smooth splines to
data. That is, the reconstructed foliation has to be as close to the
data as possible, while the foliation itself is not too rough. A
tradeoff parameter is introduced to control the balance between
smoothness and
closeness. The approach used in this thesis is to take the surfaces to be
surfaces of constant values of a suitable real-valued function h
on Rᵈ. The problem of reconstructing a foliation is
translated into the language of Schwartz distributions and a deep
result in the theory of distributions is used to give the
appropriate general form of the fitted function h. The model
parameters are estimated by a simplified Newton method. Under appropriate distributional assumptions on v₁,...,vₙ, confidence regions for the true normals
are developed and estimates of concentration are given.
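The spline-like tradeoff described above can be sketched in miniature. The following is an illustration under strong simplifying assumptions of my own, not the thesis's estimator: h is a quadratic polynomial on R², the *unnormalised* gradient of h is fitted to the observed unit normals by ridge-penalised least squares, and the penalty weight `lam` plays the role of the smoothness/closeness tradeoff parameter. (Matching normalised gradients, as in the thesis, is a nonlinear problem requiring an iterative method such as Newton's.)

```python
import numpy as np

def fit_foliation(z, v, lam=1e-2):
    """Toy foliation reconstruction in R^2 (illustrative only).

    Model: h(x, y) = c1*x + c2*y + c3*x**2 + c4*x*y + c5*y**2.
    We fit grad h at the data points z_i to the observed normals v_i
    by least squares, ridge-penalising the quadratic coefficients as
    a crude roughness proxy, with weight lam.
    """
    z = np.asarray(z, float)
    v = np.asarray(v, float)
    x, y = z[:, 0], z[:, 1]
    zero, one = np.zeros_like(x), np.ones_like(x)
    gx = np.column_stack([one, zero, 2 * x, y, zero])   # d h / d x per basis fn
    gy = np.column_stack([zero, one, zero, x, 2 * y])   # d h / d y per basis fn
    G = np.vstack([gx, gy])                             # (2n, 5) design matrix
    target = np.concatenate([v[:, 0], v[:, 1]])
    P = lam * np.diag([0.0, 0.0, 1.0, 1.0, 1.0])        # penalise curvature terms
    c = np.linalg.solve(G.T @ G + P, G.T @ target)
    return c  # coefficients of h; level sets of h are the fitted foliation

# The foliation by horizontal lines y = const has normals (0, 1) everywhere,
# so the fit should recover h(x, y) = y, i.e. coefficients (0, 1, 0, 0, 0):
coef = fit_foliation(z=[[0, 0], [1, 0], [0, 1], [1, 1]],
                     v=[[0, 1], [0, 1], [0, 1], [0, 1]])
```

Increasing `lam` shrinks the curvature coefficients toward zero, flattening the fitted surfaces; decreasing it lets the fit track the observed normals more closely.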
Fri, 01 Jun 2007 00:00:00 GMThttp://hdl.handle.net/10023/1582007-06-01T00:00:00ZYeh, Shu-Ying