Statistics
http://hdl.handle.net/10023/95
Feed updated: Thu, 25 Apr 2019 15:44:01 GMT
Loggerhead turtle Caretta caretta density and abundance in Chesapeake Bay and the temperate ocean waters of the southern portion of the Mid-Atlantic Bight
http://hdl.handle.net/10023/16833
We conducted aerial surveys of sea turtles in 2011 and 2012, incorporating corrections for perception and availability bias in Chesapeake Bay and near-shore continental shelf waters of the Mid-Atlantic Bight off the US states of Virginia and Maryland. Results of these surveys and ancillary research to determine surface times for loggerhead turtles provide us with a new baseline population estimate for turtles in the region. Prior surveys were conducted in Chesapeake Bay in the mid-1980s and early 2000s, and in ocean waters in the late 1970s and early 1980s. Although comparison of density estimates not corrected for availability between prior surveys and this effort suggests that the population of sea turtles, especially loggerhead turtles, is higher than previous estimates, differences between surveys may be the result of survey methodologies and cannot be assumed to be true changes in density. Surface time for availability corrections was calculated using dive summaries from satellite telemetry on 27 loggerhead turtles tracked between 2011 and 2015. We calculated stratified seasonal availability corrections for bay and ocean waters based on assumed differences in turtle behavior and water clarity between the 2 habitats. For each habitat, we provided seasonal corrections for 3 detection depth bins (shallow, moderate, and deep) to account for differences in sub-surface detection ranges. Differences and trends toward differences among availability corrections underscore the need to better understand the many variables that affect surface time for sea turtles in temperate waters, and the effect that availability has on abundance and density estimates.
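The availability correction at the heart of this work is, at its simplest, a division of the uncorrected density by the proportion of time animals are detectable. A minimal sketch, with invented numbers rather than the study's values:

```python
# Availability-bias correction for an aerial survey density estimate:
# divide the uncorrected density by the probability that an animal is at
# or near the surface during a pass. Numbers below are illustrative.
def corrected_density(observed_density, availability):
    if not 0 < availability <= 1:
        raise ValueError("availability must be in (0, 1]")
    return observed_density / availability

# If turtles are detectable only 40% of the time, an observed density of
# 0.02 turtles/km^2 implies a corrected density of about 0.05 turtles/km^2.
print(corrected_density(0.02, 0.40))  # ≈ 0.05
```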
Funding was provided by the NOAA Species Recovery Grants to States program (Award #NA 47200033) issued to the Virginia Department of Game and Inland Fisheries which contracted with the Virginia Aquarium & Marine Science Center Foundation. Additional funding for tags and turtle capture was also provided by US Fleet Forces Command as well as the Virginia Aquarium Batten Collaborative Research Fund and Batten Professional Development Fund.
Thu, 13 Dec 2018 00:00:00 GMT
Barco, Susan G.; Burt, M. Louise; DiGiovanni, Robert A.; Swingle, W. Mark; Williard, Amanda S.

Accounting for preferential sampling in species distribution models
http://hdl.handle.net/10023/16797
Species distribution models (SDMs) are now widely used in ecology for management and conservation purposes across terrestrial, freshwater, and marine realms. The increasing interest in SDMs has drawn the attention of ecologists to spatial models and, in particular, to geostatistical models, which are used to associate observations of species occurrence or abundance with environmental covariates at a finite number of locations in order to predict where (and how much of) a species is likely to be present in unsampled locations. Standard geostatistical methodology assumes that the choice of sampling locations is independent of the values of the variable of interest. However, in natural environments, due to practical limitations related to time and financial constraints, this theoretical assumption is often violated. In fact, data commonly derive from opportunistic sampling (e.g., whale or bird watching), in which observers tend to look for a specific species in areas where they expect to find it. These are examples of what is referred to as preferential sampling, which can lead to biased predictions of the distribution of the species. The aim of this study is to discuss an SDM that addresses this problem and that is more computationally efficient than existing MCMC methods. From a statistical point of view, we interpret the data as a marked point pattern, where the sampling locations form a point pattern and the measurements taken at those locations (i.e., species abundance or occurrence) are the associated marks. Inference and prediction of species distribution are performed using a Bayesian approach, and integrated nested Laplace approximation (INLA) methodology and software are used for model fitting to minimize the computational burden. We show that abundance is highly overestimated at low-abundance locations when preferential sampling effects are not accounted for, in both a simulated example and a practical application using fishery data.
This highlights that ecologists should be aware of the potential bias resulting from preferential sampling and account for it in a model when a survey is based on non‐randomized and/or non‐systematic sampling.
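The bias mechanism is easy to reproduce in a few lines of simulation: if observers visit sites with probability proportional to abundance, a naive average over visited sites is inflated. A hedged sketch (all values simulated, not from the fishery application):

```python
import random

random.seed(1)

# 1000 sites with true abundances; observers preferentially visit
# high-abundance sites (visit weight proportional to abundance), as in
# opportunistic whale- or bird-watching data. All values are simulated.
abundance = [random.expovariate(1.0) for _ in range(1000)]
sampled = random.choices(abundance, weights=abundance, k=200)

true_mean = sum(abundance) / len(abundance)
naive_mean = sum(sampled) / len(sampled)
print(true_mean, naive_mean)  # the naive mean over visited sites is inflated
```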
D. C., A. L. Q. and F. M. would like to thank the Ministerio de Educación y Ciencia (Spain) for financial support (jointly financed by the European Regional Development Fund) via Research Grants MTM2013‐42323‐P and MTM2016‐77501‐P, and ACOMP/2015/202 from Generalitat Valenciana (Spain).
Tue, 01 Jan 2019 00:00:00 GMT
Pennino, Maria Grazia; Paradinas, Iosu; Illian, Janine B.; Muñoz, Facundo; Bellido, José María; López-Quílez, Antonio; Conesa, David

Estimation of population size when capture probability depends on individual states
http://hdl.handle.net/10023/16735
We develop a multi-state model to estimate the size of a closed population from capture–recapture studies. We consider the case where capture–recapture data are not of a simple binary form, but where the state of an individual is also recorded upon every capture as a discrete variable. The proposed multi-state model can be regarded as a generalisation of the commonly applied set of closed population models to a multi-state form. The model allows for heterogeneity within the capture probabilities associated with each state while also permitting individuals to move between the different discrete states. A closed-form expression for the likelihood is presented in terms of a set of sufficient statistics. The link between existing models for capture heterogeneity is established, and simulation is used to show that the estimate of population size can be biased when movement between states is not accounted for. The proposed unconditional approach is also compared to a conditional approach to assess estimation bias. The model derived in this paper is motivated by a real ecological data set on great crested newts, Triturus cristatus.
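The multi-state likelihood itself is not reproduced here; as a simpler point of reference, a two-occasion Lincoln-Petersen estimate (with Chapman's bias correction) illustrates the kind of closed-population estimate the model generalises. Counts are invented:

```python
# Two-occasion Lincoln-Petersen estimate with Chapman's bias correction:
# the simplest closed-population capture-recapture estimator, shown only
# to illustrate what the multi-state model generalises. Counts invented.
def chapman_estimate(n1, n2, m):
    """n1 animals marked on occasion 1; n2 caught on occasion 2,
    m of which were already marked."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(chapman_estimate(50, 60, 20))  # ≈ 147.14
```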
Funding: Carnegie Trust for the Universities of Scotland, UK Engineering and Physical Sciences Research Council (EP/10009171/1), UK Natural Environment Research Council (NE/J018473/1)
Fri, 01 Mar 2019 00:00:00 GMT
Worthington, Hannah; McCrea, Rachel; King, Ruth; Griffiths, Richard

Title redacted
http://hdl.handle.net/10023/16693
Mon, 20 Nov 2017 00:00:00 GMT
Erichson, N. Benjamin

Incorporating animal movement with distance sampling and spatial capture-recapture
http://hdl.handle.net/10023/16467
Distance sampling and spatial capture-recapture are statistical methods to estimate the number of animals in a wild population based on encounters between these animals and scientific detectors. Both methods estimate the probability an animal is detected during a survey, but do not explicitly model animal movement.

The primary challenge is that animal movement in these surveys is unobserved; one must average over all possible paths each animal could have travelled during the survey. In this thesis, a general statistical model, with distance sampling and spatial capture-recapture as special cases, is presented that explicitly incorporates animal movement. An efficient algorithm to integrate over all possible movement paths, based on quadrature and hidden Markov modelling, is given to overcome the computational obstacles.

For distance sampling, simulation studies and case studies show that incorporating animal movement can reduce the bias in estimated abundance found in conventional models and expand the application of distance sampling to surveys that violate the assumption of no animal movement. For spatial capture-recapture, continuous-time encounter records are used to make detailed inference on where animals spend their time during the survey. In surveys conducted in discrete occasions, maximum likelihood models that allow for mobile activity centres are presented to account for transience, dispersal, and heterogeneous space use. These methods provide an alternative when animal movement causes bias in standard methods, and the opportunity to gain richer inference on how animals move, where they spend their time, and how they interact.
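The quadrature and hidden Markov machinery described above can be sketched as a forward recursion over a discretised habitat, weighting the location distribution by the detection data at each time step and then applying a movement kernel. All probabilities below are invented:

```python
import numpy as np

# Forward-algorithm sketch: integrate over an animal's unobserved movement
# path on a discretised 1-D habitat of 3 cells (all probabilities invented).
trans = np.array([[0.8, 0.2, 0.0],    # movement kernel between cells
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])
p_detect = np.array([0.5, 0.1, 0.0])  # a detector near cell 0

def likelihood(history):
    """history[t] = 1 if the animal was detected at time t, else 0."""
    alpha = np.full(3, 1.0 / 3.0)     # uniform over initial location
    for obs in history:
        obs_prob = p_detect if obs else 1.0 - p_detect
        alpha = (alpha * obs_prob) @ trans   # weight by data, then move
    return alpha.sum()

print(likelihood([0, 1, 0]))  # probability of this detection history
```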
Thu, 06 Dec 2018 00:00:00 GMT
Glennie, Richard

Effects of neonicotinoids on bees: an invalid experiment
http://hdl.handle.net/10023/16444
We use a recent study on the effects of neonicotinoids on bees as a concrete example to reinforce the advice of OEPP/EPPO (2010) that strong inference is impossible if there is no true replication and that analyses based on pseudoreplication are invalid.
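The cost of pseudoreplication can be made concrete with variance components: when measurements share a field effect, only the within-field variance shrinks as more bees are measured, so a naive standard error that treats every bee as an independent replicate is far too small. A sketch with invented variance components:

```python
import math

# Variance components for a hypothetical bee experiment (values invented):
# between-field sd 1.0, within-field sd 0.5, 3 fields per treatment,
# 20 bees measured per field.
sf, se, n_fields, k = 1.0, 0.5, 3, 20

# True SE of a treatment mean: field effects do not average away over bees,
# so only the within-field part shrinks with the number of bees.
true_se = math.sqrt(sf**2 / n_fields + se**2 / (n_fields * k))

# Naive SE if all 60 bees are treated as independent replicates
# (the pseudoreplication error).
naive_se = math.sqrt((sf**2 + se**2) / (n_fields * k))

print(true_se, naive_se)  # true ≈ 0.581, naive ≈ 0.144
```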
Mon, 01 Jan 2018 00:00:00 GMT
Bailey, R. A.; Greenwood, J. J. D.

Techniques for estimating the size of low-density gopher tortoise populations
http://hdl.handle.net/10023/16434
Gopher tortoises (Gopherus polyphemus) are candidates for range-wide listing as threatened under the U.S. Endangered Species Act. Reliable population estimates are important to inform policy and management for recovery of the species. Line transect distance sampling has been adopted as the preferred method to estimate population size. However, when tortoise density is low, it can be challenging to obtain enough tortoise observations to reliably estimate the probability of detection, a vital component of the method. We suggest a modification to the method based on counting usable tortoise burrows (more abundant than tortoises) and separately accounting for the proportion of burrows occupied by tortoises. The increased sample size of burrows can outweigh the additional uncertainty induced by the need to account for the proportion of burrows occupied. We demonstrate the method using surveys conducted within a 13,118-ha portion of the Gopher Tortoise Habitat Management Unit at Fort Gordon Army Installation, Georgia. We used a systematic random design to obtain more precise estimates, using a newly developed systematic variance estimator. Individual transects had a spatially efficient design (pseudocircuits), which greatly improved sampling efficiency on this large site. Estimated burrow density was 0.091 ± 0.011 burrows/ha (CV = 12.6%, 95% CI = 0.071–0.116), with 25% of burrows occupied by a tortoise (CV = 14.4%), yielding a tortoise density of 0.023 ± 0.004 tortoise/ha (CV = 19.0%, 95% CI = 0.016–0.033) and a population estimate of 297 tortoises (95% CI = 210–433). These techniques are applicable to other studies and species. Surveying burrows or nests, rather than animals, can produce more reliable estimates when it leads to a significantly larger sample of detections and when the occupancy status can reliably be ascertained. Systematic line transect survey designs give better precision and are practical to implement and analyze.
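The reported precision is consistent with the standard delta-method approximation for the coefficient of variation of a product of independent estimates, which can be checked directly from the figures above:

```python
import math

# Delta-method CV for a product of independent estimates: tortoise density
# is burrow density times occupancy, so its CV combines the two components.
def product_cv(cv_a, cv_b):
    return math.sqrt(cv_a ** 2 + cv_b ** 2)

# Burrow-density CV 12.6% and occupancy CV 14.4% combine to about 19.1%,
# consistent with the ~19.0% reported for tortoise density; likewise
# 0.091 burrows/ha x 0.25 occupancy ≈ 0.023 tortoises/ha.
print(product_cv(0.126, 0.144))  # ≈ 0.191
```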
Partial support by CEAUL (funded by Fundação para a Ciência e a Tecnologia, Portugal, through the project UID/MAT/00006/2013) (TAM).
Fri, 01 Dec 2017 00:00:00 GMT
Stober, Jonathan M.; Prieto-Gonzalez, Rocio; Smith, Lora L.; Marques, Tiago A.; Thomas, Len

Surveying abundance and stand type associations of Formica aquilonia and F. lugubris (Hymenoptera: Formicidae) nest mounds over an extensive area: Trialing a novel method
http://hdl.handle.net/10023/16260
Red wood ants are ecologically important members of woodland communities, and some species are of conservation concern. They occur commonly only in certain habitats in Britain, but there is limited knowledge of their numbers and distribution. This study provided baseline information at a key locality (Abernethy Forest, 37 km2) in the central Highlands of Scotland and trialed a new method of surveying red wood ant density and stand type associations: a distance sampling line transect survey of nests. This method is efficient because it allows an observer to quickly survey a large area either side of transect lines, without having to assume that all nests are detected. Instead, data collected on the distance of nests from the line are used to estimate probability of detection and the effective transect width, using the free software "Distance". Surveys took place in August and September 2003 along a total of 71.2 km of parallel, equally-spaced transects. One hundred and forty-four red wood ant nests were located, comprising 89 F. aquilonia (Yarrow, 1955) and 55 F. lugubris (Zetterstedt, 1838) nests. Estimated densities were 1.13 nests per hectare (95% CI 0.74-1.73) for F. aquilonia and 0.83 nests per hectare (95% CI 0.32-2.17) for F. lugubris. These translated to total estimated nest numbers of 4,200 (95% CI 2,700-6,400) and 3,100 (95% CI 1,200-8,100), respectively, for the whole forest. Indices of stand selection indicated that F. aquilonia had some positive association with old-growth and F. lugubris with younger stands (stem exclusion stage). No nests were found in areas that had been clear-felled, and ploughed and planted in the 1970s-1990s. The pattern of stand type association and hence distribution of F. aquilonia and F. lugubris may be due to the differing ability to disperse (F. lugubris is the faster disperser) and compete (F. aquilonia is competitively superior). 
We recommend using line transect sampling for extensive surveys of ants that construct nest mounds to estimate abundance and stand type association.
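The density estimator underlying these figures is the standard line-transect formula; a minimal sketch, using an invented effective strip width (the survey estimated its own from the fitted detection function in Distance):

```python
# Line-transect density: n detections over a covered area equal to an
# effective strip of half-width esw on each side of the line. The 10 m
# ESW below is invented for illustration only.
def lt_density(n, line_length_m, esw_m):
    covered_area_ha = 2 * line_length_m * esw_m / 10_000
    return n / covered_area_ha

# 144 nests on 71.2 km of transect with a hypothetical 10 m ESW:
print(lt_density(144, 71_200, 10))  # ≈ 1.01 nests/ha
```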
Tue, 03 Jan 2012 00:00:00 GMT
Borkin, Kerry; Summers, Ron; Thomas, Len

Crambled: a Shiny application to enable intuitive resolution of conflicting cellularity estimates
http://hdl.handle.net/10023/16248
It is now commonplace to investigate tumour samples using whole-genome sequencing, and some commonly performed tasks are the estimation of cellularity (or sample purity), the genome-wide profiling of copy numbers, and the assessment of sub-clonal behaviours. Several tools are available to undertake these tasks, but often give conflicting results - not least because there is often genuine uncertainty due to a lack of model identifiability. Presented here is a tool, "Crambled", that allows for an intuitive visual comparison of the conflicting solutions. Crambled is implemented as a Shiny application within R, and is accompanied by example images from two use cases (one tumour sample with matched normal sequencing, and one standalone cell line example) as well as functions to generate the necessary images from any sequencing data set. Through the use of Crambled, a user may gain insight into why each tool has offered its given solution and combined with a knowledge of the disease being studied can choose between the competing solutions in an informed manner.
Mon, 07 Dec 2015 00:00:00 GMT
Lynch, Andy

Title redacted
http://hdl.handle.net/10023/15909
Fri, 23 Jun 2017 00:00:00 GMT
Millar, Colin Pearson

Determining the behavioural dose-response relationship of marine mammals to air gun noise and source proximity
http://hdl.handle.net/10023/15827
The effect of various anthropogenic sources of noise (e.g. sonar, seismic surveys) on the behaviour of marine mammals is sometimes quantified as a dose–response relationship, where the probability of an animal behaviourally 'responding' (e.g. avoiding the source) increases with 'dose' (or received level of noise). To do this, however, requires a definition of a 'significant' response (avoidance), which can be difficult to quantify. There is also the potential that the animal 'avoids' not only the source of noise but also the vessel operating the source, complicating the relationship. The proximity of the source is an important variable to consider in the response, yet difficult to account for given that received level and proximity are highly correlated. This study used the behavioural response of humpback whales to noise from two different air gun arrays (20 and 140 cubic inch air gun array) to determine whether a dose–response relationship existed. To do this, a measure of avoidance of the source was developed, and the magnitude (rather than probability) of this response was tested against dose. The proximity to the source, and the vessel itself, was included within a single analysis model. Humpback whales were more likely to avoid the air gun arrays (but not the controls) within 3 km of the source at levels over 140 re. 1 μPa² s⁻¹, meaning that both the proximity and the received level were important factors and the relationship between dose (received level) and response is not a simple one.
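A conventional dose-response analysis of this kind models response probability on the logistic scale as a function of received level, with proximity as an additional covariate. The sketch below uses invented coefficients, not estimates from this study:

```python
import math

# Illustrative logistic dose-response: probability of avoidance as a
# function of received level and source proximity. All coefficients are
# invented for this sketch, not estimated from the study.
def p_avoid(received_level, distance_km,
            b0=-30.0, b_level=0.2, b_dist=-0.5):
    eta = b0 + b_level * received_level + b_dist * distance_km
    return 1.0 / (1.0 + math.exp(-eta))

# Response probability rises with received level and falls with distance,
# but the two are confounded in practice because level decays with range.
print(p_avoid(140, 2.0), p_avoid(130, 5.0))
```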
Funding was provided as part of Joint Industry Programme on E&P Sound and Marine Life, managed by the International Association of Oil & Gas Producers (IOGP). The principal contributing companies to the programme are BG group, BHP Billiton, Chevron, ConocoPhillips, Eni, ExxonMobil, IAGC, Santos, Statoil and Woodside. The US Bureau of Ocean Energy Management (BOEM), Origin Energy, Beach Energy and AWE Limited provided support specifically for the BRAHSS study.
Wed, 16 Aug 2017 00:00:00 GMT
Dunlop, Rebecca A.; Noad, Michael J.; McCauley, Robert D.; Scott-Hayward, Lindesay Alexandra Sarah; Kniest, Eric; Slade, Robert; Paton, David; Cato, Douglas H.

Long-time analytic approximation of large stochastic oscillators: simulation, analysis and inference
http://hdl.handle.net/10023/15761
In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitation of the standard Linear Noise Approximation (LNA) by remaining uniformly accurate for long times, while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.
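pcLNA itself is not reproduced here; as a generic illustration of the kind of stochastic oscillating system being approximated, an Euler-Maruyama simulation of a toy noisy harmonic oscillator (not one of the paper's models):

```python
import math, random

random.seed(0)

# Euler-Maruyama simulation of a noisy harmonic oscillator, illustrative of
# the stochastic oscillators targeted by pcLNA (a generic toy SDE, not one
# of the paper's circadian or NF-kB models).
def simulate(n_steps=1000, dt=0.01, sigma=0.05):
    x, y = 1.0, 0.0
    path = []
    for _ in range(n_steps):
        dx = y * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        dy = -x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = simulate()
print(len(path), path[-1])
```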
This research was funded by the BBSRC Grant BB/K003097/1 (Systems Biology Analysis of Biological Timers and Inflammation). DAR was also supported by funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 305564. BBSRC web site: www.bbsrc.ac.uk Seventh Framework Programme (FP7) website: cordis.europa.eu/fp7/home_en.html.
Mon, 24 Jul 2017 00:00:00 GMThttp://hdl.handle.net/10023/157612017-07-24T00:00:00ZMinas, GiorgosRand, David A.Adaptive multivariate global testing
http://hdl.handle.net/10023/15760
We present a methodology for dealing with recent challenges in testing global hypotheses using multivariate observations. The proposed tests target situations, often arising in emerging applications of neuroimaging, where the sample size n is relatively small compared with the observations' dimension K. We employ adaptive designs allowing for sequential modifications of the test statistics adapting to accumulated data. The adaptations are optimal in the sense of maximizing the predictive power of the test at each interim analysis while still controlling the Type I error. Optimality is obtained by a general result applicable to typical adaptive design settings. Further, we prove that the potentially high-dimensional design space of the tests can be reduced to a low-dimensional projection space enabling us to perform simpler power analysis studies, including comparisons to alternative tests. We illustrate the substantial improvement in efficiency that the proposed tests can make over standard tests, especially in the case of n smaller or slightly larger than K. The methods are also studied empirically using both simulated data and data from an EEG study, where the use of prior knowledge substantially increases the power of the test. Supplementary materials for this article are available online.
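The core idea, reducing an n ≪ K global test to a univariate one by projecting the multivariate observations onto a weight vector, can be sketched as follows. The function name and the fixed weights are illustrative assumptions, not the paper's actual procedure (which adapts the weights at interim analyses):

```python
import math

def projection_test_stat(data, w):
    """One-sample t statistic after projecting K-dimensional rows onto the
    weight vector w; this turns an n << K global test into a univariate test."""
    scores = [sum(wi * xi for wi, xi in zip(w, row)) for row in data]
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

With informative weights (e.g. from prior knowledge, as in the EEG study), the projected scores concentrate the signal in one dimension, which is why such tests can beat standard high-dimensional tests when n is small relative to K.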
Sun, 01 Jun 2014 00:00:00 GMThttp://hdl.handle.net/10023/157602014-06-01T00:00:00ZMinas, GiorgosAston, John A DStallard, NigelInferring transcriptional logic from multiple dynamic experiments
http://hdl.handle.net/10023/15758
Motivation: The increasing availability of dynamic gene expression data under multiple experimental conditions makes attainable the key goal of identifying not only the transcriptional regulators of a gene but also the underlying logical structure. Results: We propose a novel method for inferring transcriptional regulation using a simple, yet biologically interpretable, model to find the logic by which a set of candidate genes and their associated transcription factors (TFs) regulate the transcriptional process of a gene of interest. Our dynamic model links the mRNA transcription rate of the target gene to the activation states of the TFs, assuming that these interactions are consistent across multiple experiments and over time. A trans-dimensional Markov Chain Monte Carlo (MCMC) algorithm is used to efficiently sample the regulatory logic under different combinations of parents and rank the estimated models by their posterior probabilities. We demonstrate and compare our methodology with other methods using simulation examples and apply it to a study of transcriptional regulation of selected target genes of Arabidopsis thaliana from microarray time series data obtained under multiple biotic stresses. We show that our method is able to detect complex regulatory interactions that are consistent under multiple experimental conditions. Availability and implementation: Programs are written in MATLAB (with Statistics Toolbox Release 2016b, The MathWorks, Inc., Natick, Massachusetts, United States) and are available on GitHub at https://github.com/giorgosminas/TRS and at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software.
This work was supported by the Biotechnology and Biological Sciences Research Council [BB/F005806/1, BB/K003097/1], the Engineering and Physical Sciences Research Council [EP/C544587/1 to DAR] and the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 305564.
Wed, 01 Nov 2017 00:00:00 GMThttp://hdl.handle.net/10023/157582017-11-01T00:00:00ZMinas, GiorgosJenkins, Dafyd J.Rand, David A.Finkenstädt, BärbelReTrOS : a MATLAB toolbox for reconstructing transcriptional activity from gene and protein expression data
http://hdl.handle.net/10023/15759
BACKGROUND: Given the development of high-throughput experimental techniques, an increasing number of whole genome transcription profiling time series data sets, with good temporal resolution, are becoming available to researchers. The ReTrOS toolbox (Reconstructing Transcription Open Software) provides MATLAB-based implementations of two related methods, namely ReTrOS-Smooth and ReTrOS-Switch, for reconstructing the temporal transcriptional activity profile of a gene from given mRNA expression time series or protein reporter time series. The methods are based on fitting a differential equation model incorporating the processes of transcription, translation and degradation. RESULTS: The toolbox provides a framework for model fitting along with statistical analyses of the model with a graphical interface and model visualisation. We highlight several applications of the toolbox, including the reconstruction of the temporal cascade of transcriptional activity inferred from mRNA expression data and protein reporter data in the core circadian clock in Arabidopsis thaliana, and how such reconstructed transcription profiles can be used to study the effects of different cell lines and conditions. CONCLUSIONS: The ReTrOS toolbox allows users to analyse gene and/or protein expression time series where, with appropriate formulation of prior information about a minimum of kinetic parameters, in particular rates of degradation, users are able to infer timings of changes in transcriptional activity. Data from any organism and obtained from a range of technologies can be used as input due to the flexible and generic nature of the model and implementation. The output from this software provides a useful analysis of time series data and can be incorporated into further modelling approaches or in hypothesis generation.
This work was supported by the Biotechnology and Biological Sciences Research Council [BB/F005806/1, BB/F005237/1]; and the Engineering and Physical Sciences Research Council [EP/C544587/1 to DAR].
Mon, 26 Jun 2017 00:00:00 GMThttp://hdl.handle.net/10023/157592017-06-26T00:00:00ZMinas, GiorgosMomiji, HiroshiJenkins, Dafyd JCosta, Maria JRand, David AFinkenstädt, BärbelIncorporating animal movement into circular plot and point transect surveys of wildlife abundance
http://hdl.handle.net/10023/15612
Estimating wildlife abundance is fundamental for its effective management and conservation.
A range of methods exist: total counts, plot sampling, distance sampling and
capture-recapture based approaches. All of these methods rely on assumptions, whose violation can
lead to substantial bias. Current research in the field is focused not on establishing new
methods but on extending existing methods to deal with violations of their assumptions.
This thesis focuses on incorporating animal movement into circular plot sampling (CPS)
and point transect sampling (PTS), where a key assumption is that animals do not move
while within detection range, i.e., the survey is a snapshot in time. While targeting this
goal, we found some unexpected bias in PTS when animals were still and model selection
was used to choose among different candidate models for the detection function (the
model describing how detectability changes with observer-animal distance). Using a simulation
study, we found that, although PTS estimators are asymptotically unbiased, for
the recommended sample sizes the bias depended on the form of the true detection function.
We then extended the simulation study to include animal movement, and found this
led to further bias in CPS and PTS. We present novel methods that incorporate animal
movement with constant speed into estimates of abundance. First, in CPS, we present
an analytic expression to correct for the bias given linear movement. When movement
is defined by a diffusion process, a simulation-based approach, modelling the probability
of animal presence in the circular plot, results in less than 3% bias in the abundance
estimates. For PTS we introduce an estimator composed of two linked submodels: the
movement (animals moving linearly) and the detection model. The performance of the
proposed method is assessed via simulation. Despite being biased, the new estimator
yields improved results compared to ignoring animal movement using conventional PTS.
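For reference, conventional PTS (without movement) estimates density by integrating a fitted detection function over the circular plot to get an effective detection area. The sketch below assumes a half-normal detection function and a trapezoid-rule integral; all names and parameter values are illustrative, not the thesis's code:

```python
import math

def half_normal(r, sigma):
    """Half-normal detection function g(r) = exp(-r^2 / (2 sigma^2))."""
    return math.exp(-r ** 2 / (2 * sigma ** 2))

def effective_area(sigma, w, steps=10_000):
    """Effective detection area nu = 2*pi * integral_0^w g(r) * r dr,
    computed with the trapezoid rule over [0, w]."""
    h = w / steps
    total = 0.0
    for i in range(steps):
        r0, r1 = i * h, (i + 1) * h
        total += 0.5 * (half_normal(r0, sigma) * r0 + half_normal(r1, sigma) * r1) * h
    return 2 * math.pi * total

def density_estimate(n_detections, n_points, sigma, w):
    """Conventional point-transect density estimate: detections divided by
    the total effective area surveyed across all points."""
    return n_detections / (n_points * effective_area(sigma, w))
```

When animals move while within detection range, detections are inflated relative to this snapshot model, which is the source of the bias the thesis corrects.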
Mon, 01 Jan 2018 00:00:00 GMThttp://hdl.handle.net/10023/156122018-01-01T00:00:00ZPrieto González, RocíoA continuous-time formulation for spatial capture-recapture models
http://hdl.handle.net/10023/15596
Spatial capture-recapture (SCR) models are relatively new but have become the
standard approach used to estimate animal density from capture-recapture data. It
has in the past been impractical to obtain sufficient data for analysis on species that
are very difficult to capture such as elusive carnivores that occur at low density and
range very widely. Advances in technology have led to alternative ways to virtually
“capture" individuals without having to physically hold them. Some examples of
these new non-invasive sampling methods include scat or hair collection for genetic
analysis, acoustic detection and camera trapping.
In traditional capture-recapture (CR) and SCR studies populations are sampled
at discrete points in time leading to clear and well defined occasions whereas the
new detector types mentioned above sample populations continuously in time. Researchers
with data collected continuously currently need to define an appropriate
occasion and aggregate their data accordingly thereby imposing an artificial construct
on their data for analytical convenience.
This research develops a continuous-time (CT) framework for SCR models by
treating detections as a temporal non-homogeneous Poisson process (NHPP) and
replacing the usual SCR detection function with a continuous detection hazard function.
The general CT likelihood is first developed for data from passive (also called
“proximity") detectors like camera traps that do not physically hold individuals. The
likelihood is then modified to produce a likelihood for single-catch traps (traps that
are taken out of action by capturing an animal) that has proven difficult to develop
with a discrete-occasion approach.
The lack of a suitable single-catch trap likelihood has led to researchers using
a discrete-time (DT) multi-catch trap estimator to analyse single-catch trap data.
Previous work has found the DT multi-catch estimator to be robust despite the fact
that it is known to be based on the wrong model for single-catch traps (it assumes
that the traps continue operating after catching an individual). Simulation studies in
this work confirm that the multi-catch estimator is robust for estimating density when
density is constant or does not vary much in space. However, there are scenarios with
non-constant density surfaces when the multi-catch estimator is not able to correctly
identify regions of high density. Furthermore, the multi-catch estimator is known
to be negatively biased for the intercept parameter of SCR detection functions and
there may be interest in the detection function in its own right. On the other hand
the CT single-catch estimator is unbiased or nearly so for all parameters of interest
including those in the detection function and those in the model for density.
When one assumes that the detection hazard is constant through time there is
no impact of ignoring capture times and using only the detection frequencies. This
is of course a special case and in reality detection hazards will tend to vary in time.
However when one assumes that the effects of time and distance in the time-varying
hazard are independent, then similarly there is no information in the capture times
about density and detection function parameters. The work here uses a detection
hazard that assumes independence between time and distance. Different forms for
the detection hazard are explored with the most flexible choice being that of a cyclic
regression spline.
Extensive simulation studies suggest as expected that a DT proximity estimator is
unbiased for the estimation of density even when the detection hazard varies through
time. However there are indirect benefits of incorporating capture times because
doing so will lead to a better fitting detection component of the model, and this can
prevent unexplained variation being erroneously attributed to the wrong covariate.
The analysis of two real datasets supports this assertion because the models with the
best fitting detection hazard have different effects to the other models. In addition,
modelling the detection process in continuous-time leads to a more parsimonious
approach compared to using DT models when the detection hazard varies in time.
The underlying process is occurring in continuous-time and so using CT models
allows inferences to be drawn about the underlying process; for example, the time-varying
detection hazard can be viewed as a proxy for animal activity. The CT
formulation is able to model the underlying detection hazard accurately and provides
a formal modelling framework to explore different hypotheses about activity patterns.
There is scope to integrate the CT models developed here with models for space usage
and landscape connectivity to explore these processes on a finer temporal scale.
SCR models are experiencing a rapid growth in both application and method
development. The data generating process occurs in CT and hence a CT modelling
approach is a natural fit and opens up several opportunities that are not possible
with a DT formulation. The work here makes a contribution by developing and
exploring the utility of such a CT SCR formulation.
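The building block of the CT likelihood, an NHPP with a separable time × distance detection hazard, can be sketched as follows. The sinusoidal time effect is a stand-in for the cyclic regression spline described above, and all names and values are illustrative assumptions:

```python
import math

def hazard(t, d, sigma, lam0=1.0, period=24.0):
    """Separable detection hazard: baseline time effect multiplied by a
    half-normal distance effect. The sinusoid over a 24 h cycle is a simple
    stand-in for a cyclic regression spline."""
    time_effect = lam0 * (1.0 + 0.5 * math.sin(2 * math.pi * t / period))
    dist_effect = math.exp(-d ** 2 / (2 * sigma ** 2))
    return time_effect * dist_effect

def nhpp_loglik(times, d, T, sigma, lam0=1.0, steps=2000):
    """NHPP log-likelihood for one animal-trap pair: the sum of log-hazards at
    the detection times minus the integrated hazard over [0, T] (midpoint rule)."""
    log_events = sum(math.log(hazard(t, d, sigma, lam0)) for t in times)
    h = T / steps
    integral = sum(hazard((i + 0.5) * h, d, sigma, lam0) for i in range(steps)) * h
    return log_events - integral
```

Separability of time and distance is exactly the assumption under which, as noted above, capture times carry no extra information about density; a non-separable hazard would break that equivalence.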
Sun, 01 Jan 2017 00:00:00 GMThttp://hdl.handle.net/10023/155962017-01-01T00:00:00ZDistiller, GregStatistical issues in first-in-human studies on BIA 10-2474: neglected comparison of protocol against practice
http://hdl.handle.net/10023/12740
By setting the regulatory-approved protocol for a suite of first-in-human studies on BIA 10-2474 against the subsequent French investigations, we highlight six key design and statistical issues which reinforce recommendations made by a Royal Statistical Society Working Party in the aftermath of the cytokine release storm in six healthy volunteers in the UK in 2006. The six issues are: dose determination, availability of pharmacokinetic results, dosing interval, stopping rules, appraisal by the safety committee, and the clear algorithm required when combining approvals for single and multiple ascending dose studies.
Funding information: European Union's FP7 programme, Grant/Award Number: 602552
Wed, 15 Mar 2017 00:00:00 GMThttp://hdl.handle.net/10023/127402017-03-15T00:00:00ZBird, Sheila M.Bailey, Rosemary A.Grieve, Andrew P.Senn, StephenSesqui-arrays, a generalisation of triple arrays
http://hdl.handle.net/10023/12725
A triple array is a rectangular array containing letters, each letter occurring equally often with no repeats in rows or columns, such that the numbers of letters common to two rows, two columns, or a row and a column are (possibly different) non-zero constants. Deleting the condition on the letters common to a row and a column gives a double array. We propose the term sesqui-array for such an array when only the condition on pairs of columns is deleted. Thus all triple arrays are sesqui-arrays. In this paper we give three constructions for sesqui-arrays. The first gives (n+1) × n² arrays on n(n+1) letters for n > 1. (Such an array for n = 2 was found by Bagchi.) This construction uses Latin squares. The second uses the Sylvester graph, a subgraph of the Hoffman–Singleton graph, to build a good block design for 36 treatments in 42 blocks of size 6, and then uses this in a 7 × 36 sesqui-array for 42 letters. We also give a construction for K × (K-1)(K-2)/2 sesqui-arrays on K(K-1)/2 letters. This construction uses biplanes. It starts with a block of a biplane and produces an array which satisfies the requirements for a sesqui-array except possibly that of having no repeated letters in a row or column. We show that this condition holds if and only if the Hussain chains for the selected block contain no 4-cycles. A sufficient condition for the construction to give a triple array is that each Hussain chain is a union of 3-cycles; but this condition is not necessary, and we give a few further examples. We also discuss the question of which of these arrays provide good designs for experiments.
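The common-letter conditions are easy to verify mechanically for a candidate array. The helper below checks only those conditions (not the equal-replication or no-repeat conditions), using a 3×3 Latin square, which trivially has constant common-letter counts, as a toy example; all names are illustrative:

```python
from itertools import combinations

def common_counts(array):
    """Sets of 'common letter' counts over all row pairs, column pairs,
    and row-column pairs of a rectangular letter array (rows as strings)."""
    rows = [set(r) for r in array]
    cols = [set(c) for c in zip(*array)]
    rr = {len(a & b) for a, b in combinations(rows, 2)}
    cc = {len(a & b) for a, b in combinations(cols, 2)}
    rc = {len(r & c) for r in rows for c in cols}
    return rr, cc, rc

def is_sesqui_array(array):
    """Sesqui-array conditions on common letters: a constant non-zero count
    for row pairs and for row-column pairs; column pairs are unconstrained."""
    rr, _, rc = common_counts(array)
    return len(rr) == 1 and rr != {0} and len(rc) == 1 and rc != {0}

def is_triple_array(array):
    """A triple array additionally requires a constant non-zero count for
    column pairs."""
    _, cc, _ = common_counts(array)
    return is_sesqui_array(array) and len(cc) == 1 and cc != {0}
```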
Fri, 01 Jun 2018 00:00:00 GMThttp://hdl.handle.net/10023/127252018-06-01T00:00:00ZBailey, Rosemary AnneCameron, Peter JephsonNilson, TomasTelomerecat : a ploidy-agnostic method for estimating telomere length from whole genome sequencing data
http://hdl.handle.net/10023/12591
Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously unimaginable scale. To this end, a number of approaches for estimating telomere length from whole-genome sequencing data have been proposed. Here we present Telomerecat, a novel approach to the estimation of telomere length. Previous methods have been dependent on the number of telomeres present in a cell being known, which may be problematic when analysing aneuploid cancer data and non-human samples. Telomerecat is designed to be agnostic to the number of telomeres present, making it suited for the purpose of estimating telomere length in cancer studies. Telomerecat also accounts for interstitial telomeric reads and presents a novel approach to dealing with sequencing errors. We show that Telomerecat performs well at telomere length estimation when compared to leading experimental and computational methods. Furthermore, we show that it detects expected patterns in longitudinal data, repeated measurements, and cross-species comparisons. We also apply the method to cancer cell data, uncovering an interesting relationship with the underlying telomerase genotype.
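To see why ploidy matters, consider the naive coverage-based estimator that prior approaches are built around (this is an illustrative sketch of the general idea, not Telomerecat's method; names and numbers are ours):

```python
def naive_telomere_length(telomeric_reads, read_length, mean_coverage, n_telomeres):
    """Naive per-telomere length estimate in base pairs: total telomeric
    sequence observed, divided by sequencing depth, split evenly across
    the assumed number of chromosome ends. The explicit dependence on
    n_telomeres is what breaks down for aneuploid cancer genomes."""
    total_telomeric_bp = telomeric_reads * read_length
    return total_telomeric_bp / (mean_coverage * n_telomeres)
```

With 9,200 telomeric reads of 100 bp at 50x coverage, assuming the 92 telomeres of a diploid human cell gives 200 bp per telomere; if the true telomere count is actually half that (or unknown, as in aneuploidy), the estimate doubles, which is the failure mode a ploidy-agnostic method avoids.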
Funding: Cancer Research UK Programme Grant to Simon Tavaré (C14303/A17197) (JHRF, AGL, MLS); European Commission through the Horizon 2020 project SOUND (Grant Agreement no. 633974) (AGL).
Mon, 22 Jan 2018 00:00:00 GMT
Farmery, James; Smith, Mike; NIHR BioResource - Rare Diseases; Lynch, Andy
Modelling the spatial dynamics of non-state terrorism : world study, 2002-2013
http://hdl.handle.net/10023/12067
To this day, terrorism perpetrated by non-state actors persists as a worldwide threat, as exemplified by the recent lethal attacks in Paris, London, Brussels, and the ongoing massacres perpetrated by the Islamic State in Iraq, Syria and neighbouring countries. In response, states deploy various counterterrorism policies, the costs of which could be reduced through more efficient preventive measures. The literature has not applied statistical models able to account for complex spatio-temporal dependencies, despite their potential for explaining and preventing non-state terrorism at the sub-national level. In an effort to address this shortcoming, this thesis employs Bayesian hierarchical models, where the spatial random field is represented by a stochastic partial differential equation. The results show that lethal terrorist attacks perpetrated by non-state actors tend to be concentrated in areas located within failed states from which they may diffuse locally, towards neighbouring areas. At the sub-national level, the propensity of attacks to be lethal and the frequency of lethal attacks appear to be driven by antagonistic mechanisms. Attacks are more likely to be lethal far away from large cities, at higher altitudes, in less economically developed areas, and in locations with higher ethnic diversity. In contrast, the frequency of lethal attacks tends to be higher in more economically developed areas, close to large cities, and within democratic countries.
Thu, 07 Dec 2017 00:00:00 GMT
Python, André
Modelling complex dependencies inherent in spatial and spatio-temporal point pattern data
http://hdl.handle.net/10023/12009
Point processes are mechanisms that beget point patterns. Realisations of point processes are observed in many contexts, for example, locations of stars in the sky, or locations of trees in a forest. Inferring the mechanisms that drive point processes relies on the development of models that appropriately account for the dependencies inherent in the data. Fitting models that adequately capture the complex dependency structures in either space, time, or both is often problematic. This is commonly due to—but not restricted to—the intractability of the likelihood function, or computational burden of the required numerical operations.
This thesis primarily focuses on developing point process models with some hierarchical structure, and specifically where this is a latent structure that may be considered as one of the following: (i) some unobserved construct assumed to be generating the observed structure, or (ii) some stochastic process describing the structure of the point pattern. Model fitting procedures utilised in this thesis include either (i) approximate-likelihood techniques to circumvent intractable likelihoods, (ii) stochastic partial differential equations to model continuous spatial latent structures, or (iii) improving computational speed in numerical approximations by exploiting automatic differentiation.
Moreover, this thesis extends classic point process models by considering multivariate dependencies. This is achieved through considering a general class of joint point process model, which utilise shared stochastic structures. These structures account for the dependencies inherent in multivariate point process data. These models are applied to data originating from various scientific fields; in particular, applications are considered in ecology, medicine, and geology. In addition, point process models that account for the second order behaviour of these assumed stochastic structures are also considered.
Fri, 23 Jun 2017 00:00:00 GMT
Jones-Todd, Charlotte M
Correlation estimation using components of Japanese candlesticks
http://hdl.handle.net/10023/11901
Using the wicks' difference from the classical Japanese candlestick representation of daily open, high, low and close prices brings efficiency when estimating the correlation in a bivariate Brownian motion. An interpretation of the correlation estimator of Rogers and Zhou (2008) in the light of the wicks' difference allows us to suggest modifications which lead to increased efficiency and robustness against the baseline model. An empirical study on four major financial markets confirms the advantages of the modified estimator.
Fri, 01 Jan 2016 00:00:00 GMT
Popov, Valentin Mina
Appraising the relevance of DNA copy number loss and gain in prostate cancer using whole genome DNA sequence data
http://hdl.handle.net/10023/11812
A variety of models have been proposed to explain regions of recurrent somatic copy number alteration (SCNA) in human cancer. Our study employs Whole Genome DNA Sequence (WGS) data from tumor samples (n = 103) to comprehensively assess the role of the Knudson two-hit genetic model in SCNA generation in prostate cancer. Sixty-four recurrent regions of loss and gain were detected, of which 28 were novel, including regions of loss with more than 15% frequency at Chr4p15.2-p15.1 (15.53%), Chr6q27 (16.50%) and Chr18q12.3 (17.48%). Comprehensive mutation screens of genes, lincRNA encoding sequences, control regions and conserved domains within SCNAs demonstrated that a two-hit genetic model was supported in only a minor proportion of recurrent SCNA losses examined (15/40). We found that recurrent breakpoints and regions of inversion often occur within Knudson model SCNAs, leading to the identification of ZNF292 as a target gene for the deletion at 6q14.3-q15 and NKX3.1 as a two-hit target at 8p21.3-p21.2. The importance of alterations of lincRNA sequences was illustrated by the identification of a novel mutational hotspot at the KCCAT42, FENDRR, CAT1886 and STCAT2 loci at the 16q23.1-q24.3 loss. Our data confirm that the burden of SCNAs is predictive of biochemical recurrence, define nine individual regions that are associated with relapse, and highlight the possible importance of ion channel and G-protein coupled-receptor (GPCR) pathways in cancer development. We conclude that a two-hit genetic model accounts for about one third of SCNAs, indicating that mechanisms such as haploinsufficiency and epigenetic inactivation account for the remaining SCNA losses.
We acknowledge support from Cancer Research UK (C5047/A22530, C309/A11566, C368/A6743, A368/A7990, C14303/A17197) and the Dallaglio Foundation. We also acknowledge support from the National Institute of Health Research (NIHR) (The Biomedical Research Centre at The Institute of Cancer Research & The Royal Marsden NHS Foundation Trust and the project "Prostate Cancer: Mechanisms of Progression and Treatment (PROMPT)" [G0500966/75466]). We thank the Wellcome Trust, Bob Champion Cancer Trust, The Orchid Cancer appeal, The RoseTrees Trust, The North West Cancer Research Fund, Big C, The King family, and The Masonic Charitable Foundation for funding. This research is supported by the Francis Crick Institute which receives its core funding from Cancer Research UK (FC001202), the UK Medical Research Council (FC001202), and the Wellcome Trust (FC001202).
Mon, 25 Sep 2017 00:00:00 GMT
Camacho, Niedzica; Van Loo, Peter; Edwards, Sandra; Kay, Jonathan D.; Matthews, Lucy; Haase, Kerstin; Clark, Jeremy; Dennis, Nening; Thomas, Sarah; Kremeyer, Barbara; Zamora, Jorge; Butler, Adam P.; Gundem, Gunes; Merson, Sue; Luxton, Hayley; Hawkins, Steve; Ghori, Mohammed; Marsden, Luke; Lambert, Adam; Karaszi, Katalin; Pelvender, Gill; Massie, Charlie E.; Kote-Jarai, Zsofia; Raine, Keiran; Jones, David; Howat, William J.; Hazell, Steven; Livni, Naomi; Fisher, Cyril; Ogden, Christopher; Kumar, Pardeep; Thompson, Alan; Nicol, David; Mayer, Erik; Dudderidge, Tim; Yu, Yongwei; Zhang, Hongwei; Shah, Nimish C.; Gnanapragasam, Vincent J.; The CRUK-ICGC Prostate Group; Isaacs, William; Visakorpi, Tapio; Hamdy, Freddie; Berney, Dan; Verrill, Clare; Warren, Anne Y.; Wedge, David C.; Lynch, Andrew G.; Foster, Christopher S.; Lu, Yong Jie; Bova, G. Steven; Whitaker, Hayley C.; McDermott, Ultan; Neal, David E.; Eeles, Rosalind; Cooper, Colin S.; Brewer, Daniel S.
Title redacted
http://hdl.handle.net/10023/11739
Thu, 07 Dec 2017 00:00:00 GMT
Sharifi Far, Serveh
Spatio-temporal variation in click production rates of beaked whales : implications for passive acoustic density estimation
http://hdl.handle.net/10023/11712
Passive acoustic monitoring has become an increasingly prevalent tool for estimating density of marine mammals, such as beaked whales, which vocalize often but are difficult to survey visually. Counts of acoustic cues (e.g., vocalizations), when corrected for detection probability, can be translated into animal density estimates by applying an individual cue production rate multiplier. It is essential to understand variation in these rates to avoid biased estimates. The most direct way to measure cue production rate is with animal-mounted acoustic recorders. This study utilized data from sound recording tags deployed on Blainville's (Mesoplodon densirostris, 19 deployments) and Cuvier's (Ziphius cavirostris, 16 deployments) beaked whales, in two locations per species, to explore spatial and temporal variation in click production rates. No spatial or temporal variation was detected within the average click production rate of Blainville's beaked whales when calculated over dive cycles (including silent periods between dives); however, spatial variation was detected when averaged only over vocal periods. Cuvier's beaked whales exhibited significant spatial and temporal variation in click production rates within vocal periods and when silent periods were included. This evidence of variation emphasizes the need to utilize appropriate cue production rates when estimating density from passive acoustic data.
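The cue-counting logic summarised above (detected cues corrected for detection probability, then converted to animal density with a cue-rate multiplier) can be sketched in a deliberately simplified form; the function and parameter names are ours, and real analyses use more elaborate estimators:

```python
def animal_density(n_cues, p_detect, area_km2, time_hr, cue_rate_per_hr):
    """Illustrative cue-counting density estimate.

    n_cues          -- number of cues (e.g., clicks) detected
    p_detect        -- average probability of detecting a cue
    area_km2        -- monitored area in square kilometres
    time_hr         -- monitoring duration in hours
    cue_rate_per_hr -- cues produced per animal per hour
    Returns animals per square kilometre.
    """
    # Correct the cue count for imperfect detection, then normalise
    # by survey effort to get cue density (cues per km^2 per hour).
    cue_density = n_cues / (p_detect * area_km2 * time_hr)
    # Divide by per-animal cue production rate to get animal density.
    return cue_density / cue_rate_per_hr
```

The multiplier enters inversely, which is the abstract's point about bias: with 1,000 detected cues, p_detect = 0.5, 100 km² monitored for 10 hours, a cue rate of 20 cues per hour gives 0.1 animals per km², and doubling the assumed cue rate halves the estimate.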
T.A.M. was funded under Grant No. N000141010382 from the Office of Naval Research (LATTE project) and thanks the support of CEAUL (funded by FCT - Fundação para a Ciência e a Tecnologia, Portugal, through the project UID/MAT/00006/2013). M.P.J. was funded by a Marie Curie Career Integration Grant and M.P.J. and P.L.T. were funded by MASTS (The Marine Alliance for Science and Technology for Scotland, a research pooling initiative funded by the Scottish Funding Council under grant HR09011 and contributing institutions). L.S.H. thanks the BRS Bahamas team that helped collect the Bahamas data, and A. Bocconcelli. D.H. and L.T. were funded by the Office of Naval Research (Award No. N00014-14-1-0394). N.A.S. was funded by an EU Horizon 2020 Marie Skłodowska-Curie fellowship (project ECOSOUND). DTAG data in the Canary Islands were collected with funds from the U.S. Office of Naval Research and Fundación Biodiversidad (EU project LIFE INDEMARES) with permit from the Canary Islands and Spanish governments.
Wed, 01 Mar 2017 00:00:00 GMT
Warren, Victoria E.; Marques, Tiago A.; Harris, Danielle; Thomas, Len; Tyack, Peter L.; Aguilar de Soto, Natacha; Hickmott, Leigh S.; Johnson, Mark P.
Inference from randomized (factorial) experiments
http://hdl.handle.net/10023/11606
This is a contribution to the discussion of the interesting paper by Ding [Statist. Sci. 32 (2017) 331–345], which contrasts approaches attributed to Neyman and Fisher. I believe that Fisher’s usual assumption was unit-treatment additivity, rather than the “sharp null hypothesis” attributed to him. Fisher also developed the notion of interaction in factorial experiments. His explanation leads directly to the concept of marginality, which is essential for the interpretation of data from any factorial experiment.
Sun, 01 Jan 2017 00:00:00 GMT
Bailey, Rosemary Anne
Measuring temporal trends in biodiversity
http://hdl.handle.net/10023/11534
In 2002, nearly 200 nations signed up to the 2010 target of the Convention for Biological Diversity, ‘to significantly reduce the rate of biodiversity loss by 2010’. In order to assess whether the target was met, it became necessary to quantify temporal trends in measures of diversity. This resulted in a marked shift in focus for biodiversity measurement. We explore the developments in measuring biodiversity that were prompted by the 2010 target. We consider measures based on species proportions, and also explain why a geometric mean of relative abundance estimates was preferred to such measures for assessing progress towards the target. We look at the use of diversity profiles, and consider how species similarity can be incorporated into diversity measures. We also discuss measures of turnover that can be used to quantify shifts in community composition arising for example from climate change.
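The preference for a geometric mean of relative abundance estimates is easy to illustrate: proportional increases and decreases are weighted symmetrically, so one species doubling while another halves leaves the index unchanged, whereas an arithmetic mean would report an increase. A minimal sketch (function name is ours):

```python
import math

def geometric_mean_index(relative_abundances):
    """Geometric mean of species' relative abundances, where each value is
    a species' abundance estimate in the current year divided by its
    abundance in a baseline year. An index of 1.0 means no net change."""
    logs = [math.log(x) for x in relative_abundances]
    return math.exp(sum(logs) / len(logs))
```

For two species with relative abundances 2.0 (doubled) and 0.5 (halved), the geometric mean index is exactly 1.0, while the arithmetic mean of the same values is 1.25.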
Yuan was part-funded by EPSRC/NERC Grant EP/1000917/1 and Marcon by ANR-10-LABX-25-01.
Sun, 01 Oct 2017 00:00:00 GMT
Buckland, S. T.; Yuan, Y.; Marcon, Eric
Authentication and characterisation of a new oesophageal adenocarcinoma cell line : MFD-1
http://hdl.handle.net/10023/11487
New biological tools are required to understand the functional significance of genetic events revealed by whole genome sequencing (WGS) studies in oesophageal adenocarcinoma (OAC). The MFD-1 cell line was isolated from a 55-year-old male with OAC without recombinant-DNA transformation. Somatic genetic variations from MFD-1, tumour, normal oesophagus, and leucocytes were analysed with SNP6. WGS was performed on tumour and leucocytes. RNAseq was performed on MFD-1 and two classic OAC cell lines, FLO1 and OE33. Transposase-accessible chromatin sequencing (ATAC-seq) was performed on MFD-1, OE33, and non-neoplastic HET1A cells. Functional studies were performed. MFD-1 had a high SNP genotype concordance with matched germline/tumour. Parental tumour and MFD-1 carried four somatically acquired mutations in three recurrently mutated genes in OAC (TP53, ABCB1 and SEMA5A) not present in FLO-1 or OE33. MFD-1 displayed high expression of epithelial and glandular markers and a unique fingerprint of open chromatin. MFD-1 was tumorigenic in SCID mice and proliferative and invasive in 3D cultures. The clinical utility of whole genome sequencing projects will be delivered using accurate model systems to develop molecular-phenotype therapeutics. We have described the first such system to arise from the oesophageal International Cancer Genome Consortium project.
Wed, 07 Sep 2016 00:00:00 GMT
Garcia, Edwin; Hayden, Annette; Birts, Charles; Britton, Edward; Cowie, Andrew; Pickard, Karen; Mellone, Massimiliano; Choh, Clarisa; Derouet, Mathieu; Duriez, Patrick; Noble, Fergus; White, Michael J.; Primrose, John N.; Strefford, Jonathan C.; Rose-Zerilli, Matthew; Thomas, Gareth J.; Ang, Yeng; Sharrocks, Andrew D.; Fitzgerald, Rebecca C.; Underwood, Timothy J.; Lynch, Andy G.
Choline kinase alpha as an androgen receptor chaperone and prostate cancer therapeutic target
http://hdl.handle.net/10023/11481
Background: The androgen receptor (AR) is a major drug target in prostate cancer (PCa). We profiled the AR-regulated kinome to identify clinically relevant and druggable effectors of AR signaling. Methods: Using genome-wide approaches, we interrogated all AR-regulated kinases. Among these, choline kinase alpha (CHKA) expression was evaluated in benign (n = 195), prostatic intraepithelial neoplasia (PIN) (n = 153) and prostate cancer (PCa) lesions (n = 359). We interrogated how CHKA regulates AR signaling using biochemical assays and investigated androgen regulation of CHKA expression in men with PCa, both untreated (n = 20) and treated with the androgen biosynthesis inhibitor degarelix (n = 27). We studied the effect of CHKA inhibition on the PCa transcriptome using RNA sequencing and tested the effect of CHKA inhibition on cell growth, clonogenic survival and invasion. Tumor xenografts (n = 6 per group) were generated in mice using genetically engineered prostate cancer cells with inducible CHKA knockdown. Data were analyzed with χ² tests, Cox regression analysis, and Kaplan-Meier methods. All statistical tests were two-sided. Results: CHKA expression was shown to be androgen regulated in cell lines, xenografts, and human tissue (log fold change from 6.75 to 6.59, P = .002) and was positively associated with tumor stage. CHKA binds directly to the ligand-binding domain (LBD) of AR, enhancing its stability. As such, CHKA is the first kinase identified as an AR chaperone. Inhibition of CHKA repressed the AR transcriptional program, including pathways enriched for regulation of protein folding, decreased AR protein levels, and inhibited the growth of PCa cell lines, human PCa explants, and tumor xenografts. Conclusions: CHKA can act as an AR chaperone, providing, to our knowledge, the first evidence for kinases as molecular chaperones, making CHKA both a marker of tumor progression and a potential therapeutic target for PCa.
The RNA-seq data generated during this work has been submitted to Gene Expression Omnibus and is available for viewing at the following link: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=ytazouoixxedjal&acc=GSE63700.
Sun, 01 May 2016 00:00:00 GMThttp://hdl.handle.net/10023/114812016-05-01T00:00:00ZAsim, MohammadMassie, Charles E.Orafidiya, FolakePértega-Gomes, NelmaWarren, Anne Y.Esmaeili, MohsenSelth, Luke A.Zecchini, Heather I.Luko, KatarinaQureshi, ArhamBaridi, AjoebMenon, SurajMadhu, BasettiEscriu, CarlosLyons, ScottVowler, Sarah L.Zecchini, Vincent R.Shaw, GregHessenkemper, WiebkeRussell, RoslinMohammed, HishamStefanos, NikiLynch, Andy G.Grigorenko, ElenaD’Santos, CliveTaylor, ChrisLamb, AlastairSriranjan, RouchelleYang, JialiStark, RoryDehm, Scott M.Rennie, Paul S.Carroll, Jason S.Griffiths, John R.Tavaré, SimonMills, Ian G.McEwan, Iain J.Baniahmad, AriaTilley, Wayne D.Neal, David E.Background: The androgen receptor (AR) is a major drug target in prostate cancer (PCa). We profiled the AR-regulated kinome to identify clinically relevant and druggable effectors of AR signaling. Methods: Using genome-wide approaches, we interrogated all AR regulated kinases. Among these, choline kinase alpha (CHKA) expression was evaluated in benign (n = 195), prostatic intraepithelial neoplasia (PIN) (n = 153) and prostate cancer (PCa) lesions (n = 359). We interrogated how CHKA regulates AR signaling using biochemical assays and investigated androgen regulation of CHKA expression in men with PCa, both untreated (n = 20) and treated with an androgen biosynthesis inhibitor degarelix (n = 27). We studied the effect of CHKA inhibition on the PCa transcriptome using RNA sequencing and tested the effect of CHKA inhibition on cell growth, clonogenic survival and invasion. Tumor xenografts (n = 6 per group) were generated in mice using genetically engineered prostate cancer cells with inducible CHKA knockdown. Data were analyzed with χ 2 tests, Cox regression analysis, and Kaplan-Meier methods. All statistical tests were two-sided. 
Results: CHKA expression was shown to be androgen regulated in cell lines, xenografts, and human tissue (log fold change from 6.75 to 6.59, P = .002) and was positively associated with tumor stage. CHKA binds directly to the ligand-binding domain (LBD) of AR, enhancing its stability. As such, CHKA is the first kinase identified as an AR chaperone. Inhibition of CHKA repressed the AR transcriptional program, including pathways enriched for regulation of protein folding, decreased AR protein levels, and inhibited the growth of PCa cell lines, human PCa explants, and tumor xenografts. Conclusions: CHKA can act as an AR chaperone, providing, to our knowledge, the first evidence for kinases as molecular chaperones, making CHKA both a marker of tumor progression and a potential therapeutic target for PCa.
Whole-genome sequencing of nine esophageal adenocarcinoma cell lines
http://hdl.handle.net/10023/11480
Esophageal adenocarcinoma (EAC) is highly mutated and molecularly heterogeneous. The number of cell lines available for study is limited and their genomes have been only partially characterized. An accurate annotation of their mutational landscape is crucial for sound experimental design and correct interpretation of genotype-phenotype findings. We performed high-coverage, paired-end whole-genome sequencing on eight EAC cell lines (ESO26, ESO51, FLO-1, JH-EsoAd1, OACM5.1 C, OACP4 C, OE33, SK-GT-4), all verified against original patient material, and one esophageal high-grade dysplasia cell line, CP-D. We have made the aligned sequence data available and report single nucleotide variants (SNVs), small insertions and deletions (indels), and copy number alterations, identified by comparison with the human reference genome and known single nucleotide polymorphisms (SNPs). We compare these putative mutations to mutations found in primary tissue EAC samples, to inform the use of these cell lines as a model of EAC.
This work was funded by an MRC Programme Grant to R.C.F. and a Cancer Research UK grant to PAWE. The pipeline for mutation calling is funded by Cancer Research UK as part of the International Cancer Genome Consortium. G.C. is a National Institute for Health Research Lecturer as part of a NIHR professorship grant to R.C.F. AGL is supported by a Cancer Research UK programme grant (C14303/A20406) to Simon Tavaré and the European Commission through the Horizon 2020 project SOUND (Grant Agreement no. 633974).
Fri, 10 Jun 2016 00:00:00 GMT
http://hdl.handle.net/10023/11480
2016-06-10T00:00:00Z
Contino, Gianmarco; Eldridge, Matthew D.; Secrier, Maria; Bower, Lawrence; Elliott, Rachael Fels; Weaver, Jamie; Lynch, Andy G.; Edwards, Paul A.W.; Fitzgerald, Rebecca C.
Decomposition of mutational context signatures using quadratic programming methods
http://hdl.handle.net/10023/11479
Methods for inferring signatures of mutational contexts from large cancer sequencing data sets are invaluable for biological research, but impractical for clinical application where we require tools that decompose the context data for an individual into signatures. One such method has recently been published using an iterative linear modelling approach. A natural alternative places the problem within a quadratic programming framework and is presented here, where it is seen to offer advantages of speed and accuracy.
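The quadratic programming formulation described above can be illustrated with a minimal sketch (not the published implementation; the toy signature matrix and weights below are hypothetical). Decomposing an individual's mutational context counts into known signatures amounts to minimising ||Sw − c||² subject to the weights being non-negative and summing to one, solved here by projected gradient descent onto the probability simplex:

```python
# Minimal sketch of signature decomposition as a quadratic program:
#   minimise ||S w - c||^2  subject to  w >= 0, sum(w) = 1,
# solved by projected gradient descent onto the probability simplex.
# Not the published implementation; for illustration only.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def decompose(S, c, n_iter=2000):
    """Estimate signature weights w for a context vector c, given a signature
    matrix S (contexts x signatures) whose columns each sum to 1."""
    n_sig = S.shape[1]
    w = np.full(n_sig, 1.0 / n_sig)          # start from uniform weights
    step = 1.0 / np.linalg.norm(S.T @ S, 2)  # step size from the Lipschitz constant
    for _ in range(n_iter):
        grad = S.T @ (S @ w - c)             # gradient of the squared error
        w = project_simplex(w - step * grad)
    return w
```

The simplex constraint is what distinguishes this from an unconstrained least-squares fit: the returned weights are directly interpretable as signature proportions.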
AGL was supported in this work by a Cancer Research UK programme grant [C14303/A20406] to Simon Tavaré. AGL acknowledges the support of the University of Cambridge, Cancer Research UK and Hutchison Whampoa Limited. Whole-genome sequencing of oesophageal adenocarcinoma was part of the oesophageal International Cancer Genome Consortium (ICGC) project. The oesophageal ICGC project was funded through a programme and infrastructure grant to Rebecca Fitzgerald as part of the OCCAMS collaboration.
Tue, 07 Jun 2016 00:00:00 GMT
http://hdl.handle.net/10023/11479
2016-06-07T00:00:00Z
Lynch, Andy G.
A tumor DNA complex aberration index is an independent predictor of survival in breast and ovarian cancer
http://hdl.handle.net/10023/11478
Complex focal chromosomal rearrangements in cancer genomes, also called "firestorms", can be scored from DNA copy number data. The complex arm-wise aberration index (CAAI) is a score that captures DNA copy number alterations that appear as focal complex events in tumors, and has potential prognostic value in breast cancer. This study aimed to validate this DNA-based prognostic index in breast cancer and test for the first time its potential prognostic value in ovarian cancer. Copy number alteration (CNA) data from 1950 breast carcinomas (METABRIC cohort) and 508 high-grade serous ovarian carcinomas (TCGA dataset) were analyzed. Cases were classified as CAAI positive if at least one complex focal event was scored. Complex alterations were frequently localized on chromosomes 8p (n = 159), 17q (n = 176) and 11q (n = 251). CAAI events on 11q were most frequent in estrogen receptor positive (ER+) cases and on 17q in estrogen receptor negative (ER−) cases. We found only a modest correlation between CAAI and the overall rate of genomic instability (GII) and number of breakpoints (r = 0.27 and r = 0.42, p < 0.001). Breast cancer specific survival (BCSS), overall survival (OS) and ovarian cancer progression-free survival (PFS) were used as clinical end points in Cox proportional hazards survival analyses. CAAI positive breast cancers (43%) had higher mortality: hazard ratio (HR) of 1.94 (95% CI, 1.62-2.32) for BCSS, and of 1.49 (95% CI, 1.30-1.71) for OS. Representations of the 70-gene and the 21-gene predictors were compared with CAAI in multivariable models, and CAAI was independently significant with a Cox adjusted HR of 1.56 (95% CI, 1.23-1.99) for ER+ and 1.55 (95% CI, 1.11-2.18) for ER− disease. None of the expression-based predictors were prognostic in the ER− subset.
We found that a model including CAAI and the two expression-based prognostic signatures outperformed a model including the 21-gene and 70-gene signatures but excluding CAAI. Inclusion of CAAI in the clinical prognostication tool PREDICT significantly improved its performance. CAAI positive ovarian cancers (52%) also had worse prognosis: HRs of 1.3 (95% CI, 1.1-1.7) for PFS and 1.3 (95% CI, 1.1-1.6) for OS. This study validates CAAI as an independent predictor of survival in both ER+ and ER− breast cancer and reveals a significant prognostic value for CAAI in high-grade serous ovarian cancer. (C) 2014 The Authors. Published by Elsevier B.V. on behalf of Federation of European Biochemical Societies. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/11478
2015-01-01T00:00:00Z
Vollan, Hans Kristian Moen; Rueda, Oscar M.; Chin, Suet-Feung; Curtis, Christina; Turashuili, Gulisa; Shah, Sohrab; Lingjaerde, Ole Christian; Yuan, Yinyin; Ng, Charlotte K.; Dunning, Mark J.; Dicks, Ed; Provenzano, Elena; Sammut, Stephen; McKinney, Steven; Ellis, Ian O.; Pinder, Sarah; Purushotham, Arnie; Murphy, Leigh C.; Kristensen, Vessela N.; Brenton, James D.; Pharoah, Paul D. P.; Borresen-Dale, Anne-Lise; Aparicio, Samuel; Caldas, Carlos; Lynch, Andy
Phenotype specific analyses reveal distinct regulatory mechanism for chronically activated p53
http://hdl.handle.net/10023/11475
The downstream functions of the DNA binding tumor suppressor p53 vary depending on the cellular context, and persistent p53 activation has recently been implicated in tumor suppression and senescence. However, genome-wide information about p53-target gene regulation has been derived mostly from acute genotoxic conditions. Using ChIP-seq and expression data, we have found distinct p53 binding profiles between acutely activated (through DNA damage) and chronically activated (in senescent or pro-apoptotic conditions) p53. Compared to the classical ‘acute’ p53 binding profile, ‘chronic’ p53 peaks were closely associated with CpG-islands. Furthermore, the chronic CpG-island binding of p53 conferred distinct expression patterns between senescent and pro-apoptotic conditions. Using the p53 targets seen in the chronic conditions together with external high-throughput datasets, we have built p53 networks that revealed extensive self-regulatory ‘p53 hubs’ where p53 and many p53 targets can physically interact with each other. Integrating these results with public clinical datasets identified the cancer-associated lipogenic enzyme, SCD, which we found to be directly repressed by p53 through the CpG-island promoter, providing a mechanistic link between p53 and the ‘lipogenic phenotype’, a hallmark of cancer. Our data reveal distinct phenotype associations of chronic p53 targets that underlie specific gene regulatory mechanisms.
This work was supported by the University of Cambridge; Cancer Research UK (C14303/A17197); Hutchison Whampoa. In addition, MasashiN and TO were supported by the Human Frontier Science Program (RGY0078/2010); HK was supported by MEXT KAKENHI (Grant Numbers 25116005 and 26291071); KT was supported by the Japan Society for the Promotion of Science (24–8563).
Thu, 19 Mar 2015 00:00:00 GMT
http://hdl.handle.net/10023/11475
2015-03-19T00:00:00Z
Kirschner, Kristina; Samarajiwa, Shamith A.; Cairns, Jonathan M.; Menon, Suraj; Pérez-Mancera, Pedro A.; Tomimatsu, Kosuke; Bermejo-Rodriguez, Camino; Ito, Yoko; Chandra, Tamir; Narita, Masako; Lyons, Scott K.; Lynch, Andy G.; Kimura, Hiroshi; Ohbayashi, Tetsuya; Tavaré, Simon; Narita, Masashi
Frequent somatic transfer of mitochondrial DNA into the nuclear genome of human cancer cells
http://hdl.handle.net/10023/11474
Mitochondrial genomes are separated from the nuclear genome for most of the cell cycle by the nuclear double membrane, intervening cytoplasm, and the mitochondrial double membrane. Despite these physical barriers, we show that somatically acquired mitochondrial-nuclear genome fusion sequences are present in cancer cells. Most occur in conjunction with intranuclear genomic rearrangements, and the features of the fusion fragments indicate that nonhomologous end joining and/or replication-dependent DNA double-strand break repair are the dominant mechanisms involved. Remarkably, mitochondrial-nuclear genome fusions occur at a similar rate per base pair of DNA as interchromosomal nuclear rearrangements, indicating the presence of a high frequency of contact between mitochondrial and nuclear DNA in some somatic cells. Transmission of mitochondrial DNA to the nuclear genome occurs in neoplastically transformed cells, but we do not exclude the possibility that some mitochondrial-nuclear DNA fusions observed in cancer occurred years earlier in normal somatic cells.
Mon, 01 Jun 2015 00:00:00 GMT
http://hdl.handle.net/10023/11474
2015-06-01T00:00:00Z
Ju, Young Seok; Tubio, Jose M.C.; Mifsud, William; Fu, Beiyuan; Davies, Helen R.; Ramakrishna, Manasa; Li, Yilong; Yates, Lucy; Gundem, Gunes; Tarpey, Patrick S.; Behjati, Sam; Papaemmanuil, Elli; Martin, Sancha; Fullam, Anthony; Gerstung, Moritz; Nangalia, Jyoti; Green, Anthony R.; Caldas, Carlos; Borg, Åke; Tutt, Andrew; Michael Lee, Ming Ta; Van'T Veer, Laura J.; Tan, Benita K.T.; Aparicio, Samuel; Span, Paul N.; Martens, John W.M.; Knappskog, Stian; Vincent-Salomon, Anne; Børresen-Dale, Anne Lise; Eyfjörd, Jórunn Erla; Flanagan, Adrienne M.; Foster, Christopher; Neal, David E.; Cooper, Colin; Eeles, Rosalind; Lakhani, Sunil R.; Desmedt, Christine; Thomas, Gilles; Richardson, Andrea L.; Purdie, Colin A.; Thompson, Alastair M.; McDermott, Ultan; Yang, Fengtang; Nik-Zainal, Serena; Campbell, Peter J.; Stratton, Michael R.; Lynch, Andy
Mobile element insertions are frequent in oesophageal adenocarcinomas and can mislead paired-end sequencing analysis
http://hdl.handle.net/10023/11473
Background: Mobile elements are active in the human genome, both in the germline and in cancers, where they can mutate driver genes. Results: While analysing whole-genome paired-end sequencing of oesophageal adenocarcinomas to find genomic rearrangements, we identified three ways in which new mobile element insertions appear in the data, resembling translocation or insertion junctions: inserts where unique sequence has been transduced by an L1 (Long interspersed element 1) mobile element; novel inserts that are confidently, but often incorrectly, mapped by alignment software to L1s or polyA tracts in the reference sequence; and a combination of these two, where different sequences within one insert are mapped to different loci. We identified nine unique sequences that were transduced by neighbouring L1s, both L1s in the reference genome and L1s not present in the reference. Many of the resulting inserts were small fragments that include little or no recognisable mobile element sequence. We found six loci in the reference genome to which sequence reads from inserts were frequently mapped, probably erroneously, by alignment software: these were either L1 sequence or particularly long polyA runs. Inserts identified from such apparent rearrangement junctions averaged 16 per tumour (range 0-153 insertions across 43 tumours). However, many inserts would not be detected by mapping the sequences to the reference genome, because they do not include sufficient mappable sequence. To estimate the total number of somatic inserts we searched for polyA sequences that were not present in the matched normal or in other normals from the same tumour batch, and were not associated with known polymorphisms. Samples of these candidate inserts were verified by sequencing across them or by manual inspection of surrounding reads: at least 85% were somatic and resembled L1-mediated events, most including L1Hs sequence.
Approximately 100 such inserts were detected per tumour on average (range zero to approximately 700). Conclusions: Somatic mobile element insertions are abundant in these tumours, with over 75% of cases having novel inserts detected. The inserts create a variety of problems for the interpretation of paired-end sequencing data.
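The polyA screen described above can be illustrated with a toy scan (not the authors' pipeline; the read sequences, run length, and flank size below are hypothetical): flag tumour reads carrying a long polyA run whose flanking sequence is absent from the matched normal, as candidate somatic L1-mediated inserts.

```python
# Toy illustration (not the authors' pipeline) of the polyA screen: flag
# tumour reads with a long polyA run whose left flank is absent from the
# matched normal, as candidate somatic L1-mediated insertions.
import re

def candidate_polya_reads(tumour_reads, normal_reads, min_run=12, flank=10):
    polya = re.compile(r"A{%d,}" % min_run)   # a run of >= min_run consecutive A's
    normal_seq = "|".join(normal_reads)       # crude lookup of normal sequence
    candidates = []
    for read in tumour_reads:
        m = polya.search(read)
        if not m:
            continue
        # Sequence immediately left of the polyA run; reads whose run starts at
        # position 0 have no informative flank and are skipped.
        left = read[max(0, m.start() - flank):m.start()]
        if left and left not in normal_seq:
            candidates.append(read)
    return candidates
```

A real screen would of course also check the other tumours in the batch and known polymorphic insertions, as the abstract describes; this sketch only shows the tumour-versus-matched-normal comparison.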
Funding was primarily from Cancer Research UK program grants to RCF and ST (C14478/A15874 and C14303/A17197), with additional support awarded to RCF from UK Medical Research Council, NHS National Institute for Health Research (NIHR), the Experimental Cancer Medicine Centre Network and the NIHR Cambridge Biomedical Research Centre, and Cancer Research UK Project grant C1023/A14545 to PAWE. JMJW was funded by a Wellcome Trust Translational Medicine and Therapeutics grant.
Fri, 10 Jul 2015 00:00:00 GMT
http://hdl.handle.net/10023/11473
2015-07-10T00:00:00Z
Paterson, Anna L.; Weaver, Jamie M. J.; Eldridge, Matthew D.; Tavare, Simon; Fitzgerald, Rebecca C.; Edwards, Paul A. W.; Lynch, Andy
Mining human prostate cancer datasets : the “camcAPP” shiny app
http://hdl.handle.net/10023/11472
Funding: Core CRUK funding: MD, AGL, ADL. Academy of Medical Sciences Clinical Lecturer Starter Grant SGCL11 (principal funder of this work): ADL.
Wed, 01 Mar 2017 00:00:00 GMT
http://hdl.handle.net/10023/11472
2017-03-01T00:00:00Z
Dunning, Mark J.; Vowler, Sarah L.; Lalonde, Emilie; Ross-Adams, Helen; Boutros, Paul; Mills, Ian G.; Lynch, Andy G.; Lamb, Alastair D.
HES5 silencing is an early and recurrent change in prostate tumourigenesis
http://hdl.handle.net/10023/11471
Prostate cancer is the most common cancer in men, resulting in over 10 000 deaths/year in the UK. Sequencing and copy number analysis of primary tumours has revealed heterogeneity within tumours and an absence of recurrent founder mutations, consistent with non-genetic disease initiating events. Using methylation profiling in a series of multifocal prostate tumours, we identify promoter methylation of the transcription factor HES5 as an early event in prostate tumourigenesis. We confirm that this epigenetic alteration occurs in 86-97% of cases in two independent prostate cancer cohorts (n=49 and n=39 tumour-normal pairs). Treatment of prostate cancer cells with the demethylating agent 5-aza-2′-deoxycytidine increased HES5 expression and downregulated its transcriptional target HES6, consistent with functional silencing of the HES5 gene in prostate cancer. Finally, we identify and test a transcriptional module involving the AR, ERG, HES1 and HES6 and propose a model for the impact of HES5 silencing on tumourigenesis as a starting point for future functional studies.
The ICGC Prostate UK Group is funded by Cancer Research UK Grant C5047/A14835, by the Dallaglio Foundation, and by The Wellcome Trust. The Human Research Tissue Bank is supported by the NIHR Cambridge Biomedical Research Centre.
Wed, 01 Apr 2015 00:00:00 GMT
http://hdl.handle.net/10023/11471
2015-04-01T00:00:00Z
Massie, Charles E.; Spiteri, Inmaculada; Ross-Adams, Helen; Luxton, Hayley; Kay, Jonathan; Whitaker, Hayley C.; Dunning, Mark J.; Lamb, Alastair D.; Ramos-Montoya, Antonio; Brewer, Daniel S.; Cooper, Colin S.; Eeles, Rosalind; Warren, Anne Y.; Tavaré, Simon; Neal, David E.; Lynch, Andy G.; UK Prostate ICGC Group
multiSNV : a probabilistic approach for improving detection of somatic point mutations from multiple related tumour samples
http://hdl.handle.net/10023/11447
Somatic variant analysis of a tumour sample and its matched normal has been widely used in cancer research to distinguish germline polymorphisms from somatic mutations. However, due to the extensive intratumour heterogeneity of cancer, sequencing data from a single tumour sample may greatly underestimate the overall mutational landscape. In recent studies, multiple spatially or temporally separated tumour samples from the same patient were sequenced to identify the regional distribution of somatic mutations and study intratumour heterogeneity. There are a number of tools to perform somatic variant calling from matched tumour-normal next-generation sequencing (NGS) data; however none of these allow joint analysis of multiple same-patient samples. We discuss the benefits and challenges of multisample somatic variant calling and present multiSNV, a software package for calling single nucleotide variants (SNVs) using NGS data from multiple same-patient samples. Instead of performing multiple pairwise analyses of a single tumour sample and a matched normal, multiSNV jointly considers all available samples under a Bayesian framework to increase sensitivity of calling shared SNVs. By leveraging information from all available samples, multiSNV is able to detect rare mutations with variant allele frequencies down to 3% from whole-exome sequencing experiments.
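Why joint calling boosts sensitivity for shared low-VAF variants can be shown with a toy two-hypothesis calculation (this is not multiSNV's actual Bayesian model; the 3% VAF, error rate, and read counts below are hypothetical). Under independent binomial sampling, log likelihood-ratio evidence for "variant shared at frequency f" versus "sequencing error" simply adds across samples:

```python
# Toy two-hypothesis illustration (not multiSNV's actual model) of joint
# calling: log likelihood-ratio evidence for a shared variant accumulates
# across samples, so a 3% VAF mutation that is marginal in one sample can
# become convincing when several same-patient samples are considered jointly.
import math

def log_lik(alt, depth, p):
    """Binomial log-likelihood of observing `alt` variant reads out of `depth`
    when the per-read variant probability is p."""
    return (math.lgamma(depth + 1) - math.lgamma(alt + 1)
            - math.lgamma(depth - alt + 1)
            + alt * math.log(p) + (depth - alt) * math.log(1 - p))

def joint_log_lr(counts, vaf=0.03, err=0.002):
    """Sum over (alt, depth) pairs of the log likelihood ratio:
    shared variant at `vaf` versus sequencing error at rate `err`."""
    return sum(log_lik(a, d, vaf) - log_lik(a, d, err) for a, d in counts)
```

With three samples each showing 3 variant reads out of 100, the joint log likelihood ratio is three times that of any single sample, which is the intuition behind pooling evidence rather than making independent pairwise calls.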
Funding: Cancer Research UK grant C14303/A17197. Funding for open access charge: University of Cambridge.
Tue, 19 May 2015 00:00:00 GMT
http://hdl.handle.net/10023/11447
2015-05-19T00:00:00Z
Josephidou, Malvina; Lynch, Andy G.; Tavaré, Simon
5-hydroxymethylcytosine marks promoters in colon that resist DNA hypermethylation in cancer
http://hdl.handle.net/10023/11446
Background: The discovery of cytosine hydroxymethylation (5hmC) as a mechanism that potentially controls DNA methylation changes typical of neoplasia prompted us to investigate its behaviour in colon cancer. 5hmC is globally reduced in proliferating cells such as colon tumours and the gut crypt progenitors, from which tumours can arise. Results: Here, we show that colorectal tumours and cancer cells express Ten-Eleven-Translocation (TET) transcripts at levels similar to normal tissues. Genome-wide analyses show that promoters marked by 5hmC in normal tissue, and those identified as TET2 targets in colorectal cancer cells, are resistant to methylation gain in cancer. In vitro studies of TET2 in cancer cells confirm that these promoters are resistant to methylation gain independently of sustained TET2 expression. We also find that a considerable number of the methylation gain-resistant promoters marked by 5hmC in normal colon overlap with those that are marked with poised bivalent histone modifications in embryonic stem cells. Conclusions: Together, our results indicate that promoters that acquire 5hmC upon normal colon differentiation are innately resistant to neoplastic hypermethylation by mechanisms that do not require high levels of 5hmC in tumours. Our study highlights the potential of cytosine modifications as biomarkers of cancerous cell proliferation.
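Promoter-set overlaps of the kind reported above (5hmC-marked promoters overlapping bivalent promoters more than expected by chance) are commonly assessed with a hypergeometric test. A minimal stdlib sketch, with hypothetical numbers rather than the study's own counts:

```python
# Minimal sketch of a hypergeometric enrichment test for promoter-set overlap:
# the upper-tail probability of drawing >= k marked promoters in a set of
# size n from a universe of N promoters of which K are marked.
# Numbers used in any example are hypothetical, not the study's counts.
from math import comb

def hypergeom_p(N, K, n, k):
    """P(X >= k) when n promoters are drawn without replacement from N,
    of which K carry the mark of interest."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

A small p-value indicates the observed overlap is unlikely under random assignment of marks to promoters, supporting an association between the two promoter classes.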
The authors would like to acknowledge the support of The University of Cambridge, Cancer Research UK (CRUK SEB-Institute Group Award A ref10182; CRUK Senior fellowship C10112/A11388 to AEKI) and Hutchison Whampoa Limited. The Human Research Tissue Bank is supported by the NIHR Cambridge Biomedical Research Centre. FF is a ULB Professor funded by grants from the F.N.R.S. and Télévie, the IUAP P7/03 programme, the ARC (AUWB-2010-2015 ULB-No 7), the WB Health program and the Fonds Gaston Ithier. Data access: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=jpwzvsowiyuamzs&acc=GSE47592
Wed, 01 Apr 2015 00:00:00 GMT
Uribe-Lewis, Santiago; Stark, Rory; Carroll, Thomas; Dunning, Mark J.; Bachman, Martin; Ito, Yoko; Stojic, Lovorka; Halim, Silvia; Vowler, Sarah L.; Lynch, Andy G.; Delatte, Benjamin; de Bony, Eric J.; Colin, Laurence; Defrance, Matthieu; Krueger, Felix; Silva, Ana Luisa; ten Hoopen, Rogier; Ibrahim, Ashraf E.K.; Fuks, François; Murrell, Adele
Epigenetic and oncogenic regulation of SLC16A7 (MCT2) results in protein over-expression, impacting on signalling and cellular phenotypes in prostate cancer
http://hdl.handle.net/10023/11445
Monocarboxylate Transporter 2 (MCT2) is a major pyruvate transporter encoded by the SLC16A7 gene. Recent studies pointed to a consistent overexpression of MCT2 in prostate cancer (PCa), suggesting MCT2 as a putative biomarker and molecular target. Despite the importance of this observation, the mechanisms involved in MCT2 regulation are unknown. Through an integrative analysis we have discovered that selective demethylation of an internal SLC16A7/MCT2 promoter is a recurrent event in independent PCa cohorts. This demethylation is associated with expression of isoforms differing only in 5'-UTR translational control motifs, providing one contributing mechanism for MCT2 protein overexpression in PCa. Genes co-expressed with SLC16A7/MCT2 also clustered in oncogenic-related pathways, and effectors of these signalling pathways were found to bind at the SLC16A7/MCT2 gene locus. Finally, MCT2 knock-down attenuated the growth of PCa cells. The present study unveils an unexpected epigenetic regulation of SLC16A7/MCT2 isoforms and identifies a link between SLC16A7/MCT2, Androgen Receptor (AR), ETS-related gene (ERG) and other oncogenic pathways in PCa. These results underscore the importance of combining data on epigenetic, transcriptomic and protein-level changes to allow more comprehensive insights into the mechanisms underlying protein expression, which in our case provide additional weight to MCT2 as a candidate biomarker and molecular target in PCa.
Felisbino S. received a fellowship from the Sao Paulo Research Foundation (FAPESP) ref. 2013/08830-2 and 2013/06802-1. Anne Y Warren research time funded by: Cambridge Biomedical Research Centre.
Tue, 02 Jun 2015 00:00:00 GMT
Pértega-Gomes, Nelma; Vizcaino, Jose R.; Felisbino, Sergio; Warren, Anne Y.; Shaw, Greg; Kay, Jonathan; Whitaker, Hayley; Lynch, Andy G.; Fryer, Lee; Neal, David E.; Massie, Charles E.
Integration of copy number and transcriptomics provides risk stratification in prostate cancer : a discovery and validation cohort study
http://hdl.handle.net/10023/11443
Background : Understanding the heterogeneous genotypes and phenotypes of prostate cancer is fundamental to improving the way we treat this disease. As yet, there are no validated descriptions of prostate cancer subgroups derived from integrated genomics linked with clinical outcome. Methods : In a study of 482 tumour, benign and germline samples from 259 men with primary prostate cancer, we used integrative analysis of copy number alterations (CNA) and array transcriptomics to identify genomic loci that affect expression levels of mRNA in an expression quantitative trait loci (eQTL) approach, to stratify patients into subgroups that we then associated with future clinical behaviour, and compared with either CNA or transcriptomics alone. Findings : We identified five separate patient subgroups with distinct genomic alterations and expression profiles based on 100 discriminating genes in our separate discovery and validation sets of 125 and 103 men. These subgroups were able to consistently predict biochemical relapse (p = 0.0017 and p = 0.016 respectively) and were further validated in a third cohort with long-term follow-up (p = 0.027). We show the relative contributions of gene expression and copy number data on phenotype, and demonstrate the improved power gained from integrative analyses. We confirm alterations in six genes previously associated with prostate cancer (MAP3K7, MELK, RCBTB2, ELAC2, TPD52, ZBTB4), and also identify 94 genes not previously linked to prostate cancer progression that would not have been detected using either transcript or copy number data alone. We confirm a number of previously published molecular changes associated with high-risk disease, including MYC amplification, and NKX3-1, RB1 and PTEN deletions, as well as over-expression of PCA3 and AMACR, and loss of MSMB in tumour tissue.
A subset of the 100 genes outperforms established clinical predictors of poor prognosis (PSA, Gleason score), as well as previously published gene signatures (p = 0.0001). We further show how our molecular profiles can be used for the early detection of aggressive cases in a clinical setting, and inform treatment decisions. Interpretation : For the first time in prostate cancer this study demonstrates the importance of integrated genomic analyses incorporating both benign and tumour tissue data in identifying molecular alterations leading to the generation of robust gene sets that are predictive of clinical outcome in independent patient cohorts.
Study data are deposited in NCBI GEO (unique identifier number GSE70770).
Tue, 01 Sep 2015 00:00:00 GMT
Ross-Adams, H.; Lamb, A. D.; Dunning, M. J.; Halim, S.; Lindberg, J.; Massie, C. M.; Egevad, L. A.; Russell, R.; Ramos-Montoya, A.; Vowler, S. L.; Sharma, N. L.; Kay, J.; Whitaker, H.; Clark, J.; Hurst, R.; Gnanapragasam, V. J.; Shah, N. C.; Warren, A. Y.; Cooper, C. S.; Lynch, A. G.; Stark, R.; Mills, I. G.; Grönberg, H.; Neal, D. E.; Shaw, Greg; Hori, Satoshi; Baridi, Ajoeb; Tran, Maxine; Wadhwa, Karan; Nelson, Adam; Patel, Keval; Thomas, Benjamin; Luxton, Hayley; Gnanpragasam, Vincent; Doble, Andrew; Kastner, Christof; Aho, Tevita; Haynes, Beverley; Partridge, Wendy; Cromwell, Elizabeth; Sangrasi, Asif; Burge, Jo; George, Anne; Stearn, Sara; Corcoran, Marie; Coret, Hansley; Basnett, Gillian; Francis, Indu; Whitington, Thomas; Yuan, Yinyin; CamCaP Study Group
The importance of DNA methylation in prostate cancer development
http://hdl.handle.net/10023/11435
After briefly reviewing the nature of DNA methylation, its general role in cancer and the tools available to interrogate it, we consider the literature surrounding DNA methylation as relating to prostate cancer. Specific consideration is given to recurrent alterations. A list of frequently reported genes is synthesized from 17 studies that have reported on methylation changes in malignant prostate tissue, and we chart the timing of those changes in the disease's history through amalgamation of several previously published data sets. We also review associations with genetic alterations and hormone signalling, before the practicalities of investigating prostate cancer methylation using cell lines are assessed. We conclude by outlining the interplay between DNA methylation and prostate cancer metabolism and their regulation by androgen receptor, with a specific discussion of the mitochondria and their associations with DNA methylation.
Wed, 01 Feb 2017 00:00:00 GMT
Massie, Charles E.; Mills, Ian G.; Lynch, Andy G.
The early effects of rapid androgen deprivation on human prostate cancer
http://hdl.handle.net/10023/11434
The androgen receptor (AR) is the dominant growth factor in prostate cancer (PCa). Therefore, understanding how ARs regulate the human transcriptome is of paramount importance. The early effects of castration on human PCa have not previously been studied. We studied 27 patients medically castrated with degarelix 7 d before radical prostatectomy. We used mass spectrometry, immunohistochemistry, and gene expression array (validated by reverse transcription-polymerase chain reaction) to compare resected tumour with matched, controlled, untreated PCa tissue. All patients had castrate levels of serum androgen, with reduced levels of intraprostatic androgen at prostatectomy. We observed differential expression of known androgen-regulated genes (TMPRSS2, KLK3, CAMKK2, FKBP5). We identified 749 genes downregulated and 908 genes upregulated following castration. AR regulation of α-methylacyl-CoA racemase expression and three other genes (FAM129A, RAB27A, and KIAA0101) was confirmed. Upregulation of oestrogen receptor 1 (ESR1) expression was observed in malignant epithelia and was associated with differential expression of ESR1-regulated genes and correlated with proliferation (Ki-67 expression). Patient summary : This first-in-man study defines the rapid gene expression changes taking place in prostate cancer (PCa) following castration. Expression levels of the genes that the androgen receptor regulates are predictive of treatment outcome. Upregulation of oestrogen receptor 1 is a mechanism by which PCa cells may survive despite castration.
The authors thank CRUK; the NIHR; the Academy of Medical Sciences (RG:63397); the National Cancer Research Prostate Cancer: Mechanisms of Progression and Treatment (ProMPT) collaborative (G0500966/75466); Hutchison Whampoa Limited; the Human Research Tissue Bank (Addenbrooke’s Hospital, supported by the NIHR Cambridge BRC); and Cancer Research UK.
Mon, 01 Aug 2016 00:00:00 GMT
Shaw, Greg L.; Whitaker, Hayley; Corcoran, Marie; Dunning, Mark J.; Luxton, Hayley; Kay, Jonathan; Massie, Charlie E.; Miller, Jodi L.; Lamb, Alastair D.; Ross-Adams, Helen; Russell, Roslin; Nelson, Adam W.; Eldridge, Matthew D.; Lynch, Andrew G.; Ramos-Montoya, Antonio; Mills, Ian G.; Taylor, Angela E.; Arlt, Wiebke; Shah, Nimish; Warren, Anne Y.; Neal, David E.
New model for estimating glomerular filtration rate in patients with cancer
http://hdl.handle.net/10023/11432
Purpose: The glomerular filtration rate (GFR) is essential for carboplatin chemotherapy dosing; however, the best method to estimate GFR in patients with cancer is unknown. We identify the most accurate and least biased method. Methods: We obtained data on age, sex, height, weight, serum creatinine concentrations, and results for GFR from chromium-51 (51Cr) EDTA excretion measurements (51Cr-EDTA GFR) from white patients ≥ 18 years of age with histologically confirmed cancer diagnoses at the Cambridge University Hospital NHS Trust, United Kingdom. We developed a new multivariable linear model for GFR using statistical regression analysis. 51Cr-EDTA GFR was compared with the estimated GFR (eGFR) from seven published models and our new model, using root-mean-squared error (RMSE) and median residual statistics on internal and external validation data sets. We performed a comparison of carboplatin dosing accuracy on the basis of an absolute percentage error > 20%. Results: Between August 2006 and January 2013, data from 2,471 patients were obtained. The new model improved eGFR accuracy (RMSE, 15.00 mL/min; 95% CI, 14.12 to 16.00 mL/min) compared with all published models. Body surface area (BSA)-adjusted chronic kidney disease epidemiology (CKD-EPI) was the most accurate published model for eGFR (RMSE, 16.30 mL/min; 95% CI, 15.34 to 17.38 mL/min) on the internal validation set. Importantly, the new model reduced the fraction of patients with a carboplatin dose absolute percentage error > 20% to 14.17%, in contrast to 18.62% for the BSA-adjusted CKD-EPI and 25.51% for the Cockcroft-Gault formula. The results were externally validated.
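Carboplatin doses are conventionally derived from GFR via the Calvert formula, dose (mg) = target AUC × (GFR + 25). The abstract does not state the formula, so the following sketch is only an illustrative reconstruction of how a dose absolute percentage error (the > 20% criterion above) could be computed from an estimated GFR:

```python
def calvert_dose(auc_target, gfr_ml_min):
    """Carboplatin dose (mg) from the Calvert formula: AUC x (GFR + 25)."""
    return auc_target * (gfr_ml_min + 25.0)

def dose_ape(gfr_measured, gfr_estimated, auc_target=5.0):
    """Absolute percentage error of the dose implied by an estimated GFR,
    relative to the dose implied by the measured (51Cr-EDTA) GFR."""
    true_dose = calvert_dose(auc_target, gfr_measured)
    est_dose = calvert_dose(auc_target, gfr_estimated)
    return 100.0 * abs(est_dose - true_dose) / true_dose

# Hypothetical example: eGFR underestimates a measured 90 mL/min by 20 mL/min
ape = dose_ape(90.0, 70.0)  # ~17.4% dose error, under the 20% threshold
```

Note that because of the +25 offset, a 20% error in GFR produces a smaller than 20% error in dose, which is why dosing accuracy is assessed on the dose scale rather than the GFR scale.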
T.J. was supported by the Wellcome Trust Translational Medicine and Therapeutics Programme and the University of Cambridge, Department of Oncology (RJAG/076). H.E. was supported by the National Institute of Health Research Cambridge Biomedical Research Centre and the University of Cambridge.
Tue, 01 Aug 2017 00:00:00 GMT
Janowitz, Tobias; Williams, Edward H.; Marshall, Andrea; Ainsworth, Nicola; Thomas, Peter B.; Sammut, Stephen J.; Shepherd, Scott; White, Jeff; Mark, Patrick B.; Lynch, Andy G.; Jodrell, Duncan I.; Tavaré, Simon; Earl, Helena
An analysis of pilot whale vocalization activity using hidden Markov models
http://hdl.handle.net/10023/11194
Vocalizations of cetaceans form a key component of their social interactions. Such vocalization activity is driven by the behavioral states of the whales, which are not directly observable, so that latent-state models are natural candidates for modeling empirical data on vocalizations. In this paper, we use hidden Markov models to analyze calling activity of long-finned pilot whales (Globicephala melas) recorded over three years in the Vestfjord basin off Lofoten, Norway. Baseline models are used to motivate the use of three states, while more complex models are fit to study the influence of covariates on the state-switching dynamics. Our analysis demonstrates the potential usefulness of hidden Markov models in concisely yet accurately describing the stochastic patterns found in animal communication data, thereby providing a framework for drawing meaningful biological inference.
Sun, 01 Jan 2017 00:00:00 GMT
Popov, Valentin Mina; Langrock, Roland; De Ruiter, Stacy Lynn; Visser, Fleur
On the correspondence from Bayesian log-linear modelling to logistic regression modelling with g-priors
http://hdl.handle.net/10023/10854
Consider a set of categorical variables where at least one of them is binary. The log-linear model that describes the counts in the resulting contingency table implies a specific logistic regression model, with the binary variable as the outcome. Within the Bayesian framework, the g-prior and mixtures of g-priors are commonly assigned to the parameters of a generalized linear model. We prove that assigning a g-prior (or a mixture of g-priors) to the parameters of a certain log-linear model designates a g-prior (or a mixture of g-priors) on the parameters of the corresponding logistic regression. By deriving an asymptotic result, and with numerical illustrations, we demonstrate that when a g-prior is adopted, this correspondence extends to the posterior distribution of the model parameters. Thus, it is valid to translate inferences from fitting a log-linear model to inferences within the logistic regression framework, with regard to the presence of main effects and interaction terms.
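For reference, the linear-model form of Zellner's g-prior that these generalized-linear-model priors extend is, for a design matrix X and a scaling constant g (standard notation, not quoted from the paper):

\[
\beta \mid g \;\sim\; \mathcal{N}\!\left(\mathbf{0},\; g\,(X^{\top}X)^{-1}\right),
\]

and a mixture of g-priors replaces the fixed g with a hyperprior g ~ π(g).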
Thu, 01 Mar 2018 00:00:00 GMT
Papathomas, Michail
Estimating Key Largo woodrat abundance using spatially explicit capture–recapture and trapping point transects
http://hdl.handle.net/10023/10625
The Key Largo woodrat (Neotoma floridana smalli) is an endangered rodent with a restricted geographic range and small population size. Establishing an efficient monitoring program of its abundance has been problematic; previous trapping designs have not worked well because the species is sparsely distributed. We compared Key Largo woodrat abundance estimates in Key Largo, Florida, USA, obtained using trapping point transects (TPT) and spatially explicit capture–recapture (SECR) based on statistical properties, survey effort, practicality, and cost. Both methods combine aspects of distance sampling with capture–recapture, but TPT relies on radiotracking individuals to estimate detectability and SECR relies on repeat capture information to estimate densities of home ranges. Abundance estimates using TPT in the spring of 2007 and 2008 were 333 woodrats (CV = 0.46) and 696 (CV = 0.43), respectively. Abundance estimates using SECR in the spring, summer, and winter of 2007 were 97 (CV = 0.31), 334 (CV = 0.26), and 433 (CV = 0.20) animals, respectively. Trapping point transects used approximately 960 person-hours and 1,010 trap-nights/season. Spatially explicit capture–recapture used approximately 500 person-hours and 6,468 trap-nights/season. Significant time was saved in the SECR survey by setting large numbers of traps close together, minimizing time walking between traps. Trapping point transects were practical to implement in the field, and valuable auxiliary information on Key Largo woodrat behavior was obtained via radiocollaring. In this particular study, detectability of the woodrat using TPT was very low and consequently the SECR method was more efficient. Both methods require a substantial investment in survey effort to detect any change in abundance because of large uncertainty in estimates.
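The CVs quoted above are conventionally converted into 95% confidence intervals by assuming the abundance estimate is log-normally distributed (the usual convention in distance sampling, following Buckland et al.); a sketch using the TPT spring 2007 figures from the abstract:

```python
import math

def lognormal_ci(n_hat, cv, z=1.96):
    """Log-normal confidence interval for an abundance estimate with a given CV,
    as conventionally used in distance sampling (Buckland et al.)."""
    c = math.exp(z * math.sqrt(math.log(1.0 + cv ** 2)))
    return n_hat / c, n_hat * c

# TPT spring 2007 estimate from the abstract: 333 woodrats, CV = 0.46
lo, hi = lognormal_ci(333, 0.46)  # roughly (141, 786)
```

The interval is multiplicatively symmetric about the estimate, so a CV near 0.5 spans more than a five-fold range, which is the "large uncertainty" the authors refer to.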
JMP was funded by Disney's Animal Programs, the US Fish and Wildlife Service and University of St Andrews.
Wed, 01 Jun 2016 00:00:00 GMT
Potts, Joanne Marie; Buckland, Stephen Terrence; Thomas, Len; Savage, Anne
Assigning stranded bottlenose dolphins to source stocks using stable isotope ratios following the Deepwater Horizon oil spill
http://hdl.handle.net/10023/10588
The potential for stranded dolphins to serve as a tool for monitoring free-ranging populations would be enhanced if their stocks of origin were known. We used stable isotopes of carbon, nitrogen, and sulfur from skin to assign stranded bottlenose dolphins Tursiops truncatus to different habitats, as a proxy for stocks (demographically independent populations), following the Deepwater Horizon oil spill. Model results from biopsy samples collected from dolphins from known habitats (n = 205) resulted in an 80.5% probability of correct assignment. These results were applied to data from stranded dolphins (n = 217), resulting in predicted assignment probabilities of 0.473, 0.172, and 0.355 to Estuarine, Barrier Island (BI), and Coastal stocks, respectively. Differences were found west and east of the Mississippi River, with more Coastal dolphins stranding in western Louisiana and more Estuarine dolphins stranding in Mississippi. Within the Estuarine East Stock, 2 groups were identified, one predominantly associated with Mississippi and Alabama estuaries and another with western Florida. δ15N values were higher in stranded samples for both Estuarine and BI stocks, potentially indicating nutritional stress. High probabilities of correct assignment of the biopsy samples indicate predictable variation in stable isotopes and fidelity to habitat. The power of δ34S to discriminate habitats relative to salinity was essential. Stable isotopes may provide guidance regarding where additional testing is warranted to confirm demographic independence and aid in determining the source habitat of stranded dolphins, thus increasing the value of biological data collected from stranded individuals.
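The habitat assignment can be pictured as a Bayes-rule classification of a sample's isotope values under class-conditional distributions. The sketch below uses a single δ34S value with invented class means and standard deviations; the study itself fit models to 205 known-habitat biopsy samples using carbon, nitrogen, and sulfur isotopes jointly:

```python
import math

# Hypothetical class-conditional d34S distributions (mean, sd) per stock.
# These numbers are invented for illustration, not taken from the paper.
classes = {
    "Estuarine":     (5.0, 2.0),
    "BarrierIsland": (12.0, 2.0),
    "Coastal":       (17.0, 2.0),
}

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def assign(d34s):
    """Posterior probability of each stock under equal prior probabilities."""
    like = {k: normal_pdf(d34s, mu, sd) for k, (mu, sd) in classes.items()}
    total = sum(like.values())
    return {k: v / total for k, v in like.items()}

post = assign(6.0)  # a low d34S value, consistent with low-salinity habitat
```

With the invented values above, a δ34S of 6.0 assigns almost all posterior mass to the estuarine class, mirroring how δ34S tracks salinity gradients in the study.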
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10588
2017-01-31T00:00:00Z
Hohn, A. A.; Thomas, L.; Carmichael, R. H.; Litz, J.; Clemons-Chevis, C.; Shippee, S. F.; Sinclair, C.; Smith, S.; Speakman, T. R.; Tumlin, M. C.; Zolman, E. S.
Where were they from?
Modelling the source stock of dolphins stranded after the Deepwater Horizon oil spill using genetic and stable isotope data
http://hdl.handle.net/10023/10586
Understanding the source stock of common bottlenose dolphins Tursiops truncatus that stranded in the northern Gulf of Mexico subsequent to the Deepwater Horizon oil spill was essential to accurately quantify injury and apportion individuals to the appropriate stock. The aim of this study, part of the Natural Resource Damage Assessment (NRDA), was to estimate the proportion of the 932 recorded strandings between May 2010 and June 2014 that came from coastal versus bay, sound and estuary (BSE) stocks. Four sources of relevant information were available on overlapping subsets totaling 336 (39%) of the strandings: genetic stock assignment, stable isotope ratios, photo-ID and individual genetic-ID. We developed a hierarchical Bayesian model for combining these sources that weighted each data source for each stranding according to a measure of estimated precision: the effective sample size (ESS). The photo- and genetic-ID data were limited and considered to potentially introduce biases, so these data sources were excluded from analyses used in the NRDA. Estimates were calculated separately in 3 regions: East (of the Mississippi outflow), West (of the Mississippi outflow through Vermilion Bay, Louisiana) and Western Louisiana (west of Vermilion Bay to the Texas-Louisiana border); the estimated proportions of coastal strandings were, respectively 0.215 (95% CI: 0.169-0.263), 0.016 (0.036-0.099) and 0.622 (0.487-0.803). This method represents a general approach for integrating multiple sources of information that have differing uncertainties.
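The ESS weighting can be pictured with a toy calculation. This is not the paper's hierarchical Bayesian model, only a sketch of the weighting idea; the function name, probabilities, and ESS values are all hypothetical:

```python
def combine_sources(probs, ess):
    """Combine per-source estimates of P(coastal) for one stranding,
    weighting each data source by its effective sample size (ESS).
    A deliberately simplified, non-Bayesian illustration."""
    total = sum(ess)
    return sum(p * w for p, w in zip(probs, ess)) / total

# Hypothetical stranding: genetic assignment (ESS 40) points strongly to
# coastal; stable isotopes (ESS 10) are more equivocal.
p_coastal = combine_sources(probs=[0.9, 0.6], ess=[40.0, 10.0])  # 0.84
```

The higher-precision source dominates, which is the intent of weighting by ESS rather than treating every data source equally.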
Tue, 31 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/10586
2017-01-31T00:00:00Z
Thomas, L.; Booth, C. G.; Rosel, P. E.; Hohn, A.; Litz, J.; Schwacke, L. H.
Delphinid echolocation click detection probability on near-seafloor sensors
http://hdl.handle.net/10023/10512
The probability of detecting echolocating delphinids on a near-seafloor sensor was estimated using two Monte Carlo simulation methods. One method estimated the probability of detecting a single click (cue counting); the other estimated the probability of detecting a group of delphinids (group counting). Echolocation click beam pattern and source level assumptions strongly influenced detectability predictions by the cue counting model. Group detectability was also influenced by assumptions about group behaviors. Model results were compared to in situ recordings of encounters with Risso's dolphin (Grampus griseus) and presumed pantropical spotted dolphin (Stenella attenuata) from a near-seafloor four-channel tracking sensor deployed in the Gulf of Mexico (25.537°N 84.632°W, depth 1220 m). Horizontal detection range, received level and estimated source level distributions from localized encounters were compared with the model predictions. Agreement between in situ results and model predictions suggests that simulations can be used to estimate detection probabilities when direct distance estimation is not available.
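The cue-counting idea can be sketched as a small Monte Carlo: scatter clicks around the sensor, compute a received level from a source level minus propagation and off-axis losses, and count the fraction above a detection threshold. All parameter values below are illustrative, not those of the paper:

```python
import math
import random

def estimate_click_detection_prob(n=50_000, sl_mean=205.0, sl_sd=8.0,
                                  max_range=4000.0, sensor_depth=1220.0,
                                  threshold=120.0, seed=1):
    """Crude cue-counting Monte Carlo: fraction of simulated clicks whose
    received level at a near-seafloor sensor exceeds a detection threshold.
    Assumes spherical spreading and a uniform random off-axis loss."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n):
        # Random horizontal position within max_range (uniform in area).
        r = max_range * math.sqrt(rng.random())
        slant = math.hypot(r, sensor_depth)  # animal assumed near the surface
        sl = rng.gauss(sl_mean, sl_sd)       # random source level (dB)
        off_axis_loss = rng.uniform(0.0, 30.0)  # crude beam-pattern effect
        rl = sl - 20.0 * math.log10(slant) - off_axis_loss
        if rl >= threshold:
            detected += 1
    return detected / n
```

As the abstract notes, the answer is sensitive to the beam-pattern and source-level assumptions: tightening the threshold or widening the off-axis loss range changes the estimated probability substantially.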
Funding for HARP data collection and analysis was provided by the Natural Resource Damage Assessment partners (20105138) and the Center for the Integrated Modeling and Analysis of the Gulf Ecosystem (C-IMAGE) Consortium of the BP/Gulf of Mexico Research Initiative (SA 12-10/GoMRI-007). The analyses and opinions expressed are those of the authors and not necessarily those of the funding entities. This research was made possible by a grant from The Gulf of Mexico Research Initiative/C-IMAGE II.
Thu, 01 Sep 2016 00:00:00 GMT
http://hdl.handle.net/10023/10512
2016-09-01T00:00:00Z
Frasier, Kaitlin E.; Wiggins, Sean M.; Harris, Danielle; Marques, Tiago A.; Thomas, Len; Hildebrand, John A.
A simulation approach to assessing environmental risk of sound exposure to marine mammals
http://hdl.handle.net/10023/10382
Intense underwater sounds caused by military sonar, seismic surveys, and pile driving can harm acoustically sensitive marine mammals. Many jurisdictions require such activities to undergo marine mammal impact assessments to guide mitigation. However, the ability to assess impacts in a rigorous, quantitative way is hindered by large knowledge gaps concerning hearing ability, sensitivity, and behavioral responses to noise exposure. We describe a simulation-based framework, called SAFESIMM (Statistical Algorithms For Estimating the Sonar Influence on Marine Megafauna), that can be used to calculate the numbers of agents (animals) likely to be affected by intense underwater sounds. We illustrate the simulation framework using two species that are likely to be affected by marine renewable energy developments in UK waters: gray seal (Halichoerus grypus) and harbor porpoise (Phocoena phocoena). We investigate three sources of uncertainty: how sound energy is perceived by agents with differing hearing abilities; how agents move in response to noise (i.e., the strength and directionality of their evasive movements); and the way in which these responses may interact with longer-term constraints on agent movement. The estimate of received sound exposure level (SEL) is influenced most strongly by the weighting function used to account for the species' presumed hearing ability. Strongly directional movement away from the sound source can cause modest reductions (~5 dB) in SEL over the short term (periods of less than 10 days). Beyond 10 days, the way in which agents respond to noise exposure has little or no effect on SEL, unless their movements are constrained by natural boundaries. Most experimental studies of noise impacts have been short-term; however, data are needed on long-term effects because uncertainty about predicted SELs accumulates over time. Synthesis and applications: Simulation frameworks offer a powerful way to explore, understand, and estimate effects of cumulative sound exposure on marine mammals and to quantify associated levels of uncertainty. However, they can often require subjective decisions that have important consequences for management recommendations, and the basis for these decisions must be clearly described.
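Cumulative SEL across repeated or simultaneous exposures is not a simple sum of decibel values: levels must be summed on the intensity scale and converted back. A minimal sketch of that standard aggregation (the function name is ours):

```python
import math

def aggregate_sel(sels_db):
    """Aggregate per-exposure sound exposure levels (dB re 1 μPa²·s)
    by summing on the linear (intensity) scale, then converting back to dB."""
    return 10.0 * math.log10(sum(10.0 ** (s / 10.0) for s in sels_db))

# Two equal exposures add 3 dB; a much quieter source adds almost nothing.
aggregate_sel([160.0, 160.0])  # ≈ 163.01 dB
aggregate_sel([160.0, 120.0])  # ≈ 160.0004 dB
```

This is why long simulation periods matter: many individually modest exposures can still accumulate to a substantial total SEL.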
Sat, 01 Apr 2017 00:00:00 GMT
http://hdl.handle.net/10023/10382
2017-04-01T00:00:00Z
Donovan, Carl R.; Harris, Catriona M.; Milazzo, Lorenzo; Harwood, John; Marshall, Laura; Williams, Rob
An assessment of the population of cotton-top tamarins (Saguinus oedipus) and their habitat in Colombia
http://hdl.handle.net/10023/10100
Numerous animals have declining populations due to habitat loss, illegal wildlife trade, and climate change. The cotton-top tamarin (Saguinus oedipus) is a Critically Endangered primate species, endemic to northwest Colombia, threatened by deforestation and illegal trade. In order to assess the current state of this species, we analyzed changes in the population of cotton-top tamarins and its habitat from 2005 to 2012. We used a tailor-made "lure strip transect" method to survey 43 accessible forest parcels that represent 30% of the species' range. Estimated population size in the surveyed region was approximately 2,050 in 2005 and 1,900 in 2012, with a coefficient of variation of approximately 10%. The estimated population change between surveys was -7% (a decline of approximately 1.3% per year), suggesting a relatively stable population. If densities of inaccessible forest parcels are similar to those of surveyed samples, the estimated population of cotton-top tamarins in the wild in 2012 was 6,946 individuals. We also recorded little change in the amount of suitable habitat for cotton-top tamarins between sample periods: in 2005, 18% of surveyed forest was preferred habitat for cotton-top tamarins, while in 2012, 17% was preferred. We attribute the relatively stable population of this Critically Endangered species to increased conservation efforts of Proyecto Tití, conservation NGOs, and the Colombian government. Due to continued threats to cotton-top tamarins and their habitat, such as agriculture and urban expansion, ongoing conservation efforts are needed to ensure the long-term survival of cotton-top tamarins in Colombia.
Wed, 28 Dec 2016 00:00:00 GMT
http://hdl.handle.net/10023/10100
2016-12-28T00:00:00Z
Savage, Anne; Thomas, Len; Feilen, Katie L.; Kidney, Darren; Soto, Luis H.; Pearson, Mackenzie; Medina, Felix S.; Emeris, German; Guillen, Rosamira R.
Passive acoustic monitoring of the decline of Mexico's critically endangered vaquita
http://hdl.handle.net/10023/9937
The vaquita (Phocoena sinus) is the world's most endangered marine mammal with ≈245 individuals remaining in 2008. This species of porpoise is endemic to the northern Gulf of California, Mexico, and has historically suffered population declines from unsustainable bycatch in gillnets. An illegal gillnet fishery for an endangered fish, the totoaba (Totoaba macdonaldi), has recently resurged throughout the vaquita's range. The secretive but lucrative wildlife trade with China for totoaba swim bladders has probably increased vaquita bycatch mortality, but by an unknown amount. Precise population monitoring by visual surveys is difficult because vaquitas are inherently hard to see and have now become so rare that sighting rates are very low. However, their echolocation clicks can be identified readily on specialized acoustic detectors. Acoustic detections on an array of 46 moored detectors indicate that vaquita acoustic activity declined by 80% between 2011 and 2015 in the central part of the species’ range. Statistical models estimate an annual rate of decline of 34% (95% Bayesian Credible Interval -48% to -21%). Based on preliminary acoustic monitoring results from 2011–2014 the Government of Mexico enacted and is enforcing an emergency 2-year ban of gillnets throughout the species’ range to prevent extinction, at a cost of $74 million USD to compensate fishers. Developing precise acoustic monitoring methods proved critical to exposing the severity of vaquitas’ decline and emphasizes the need for continual monitoring to effectively manage critically endangered species.
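As a sanity check on the reported figures: an 80% decline over the four yearly intervals from 2011 to 2015 corresponds to a constant annual decline of about 33%, consistent with the modeled 34% estimate. A back-of-envelope sketch:

```python
# An 80% decline in acoustic activity between 2011 and 2015 spans 4 yearly
# intervals; the equivalent constant annual rate of decline is:
total_remaining = 1.0 - 0.80          # 20% of 2011 activity left by 2015
years = 4                             # 2011 -> 2015
annual_decline = 1.0 - total_remaining ** (1.0 / years)
# annual_decline ≈ 0.331, i.e. roughly 33% per year
```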
Different institutions and agencies have provided funding during the development and implementation of the acoustic monitoring program.
Wed, 01 Feb 2017 00:00:00 GMT
http://hdl.handle.net/10023/9937
2017-02-01T00:00:00Z
Jaramillo-Legorreta, Armando; Cardenas-Hinojosa, Gustavo; Nieto-Garcia, Edwyna; Rojas-Bracho, Lorenzo; Ver Hoef, Jay; Moore, Jeffrey; Tregenza, Nicholas; Barlow, Jay; Gerrodette, Tim; Thomas, Len; Taylor, Barbara
The challenges of analyzing behavioral response study data: an overview of the MOCHA (Multi-study OCean acoustics Human effects Analysis) project
http://hdl.handle.net/10023/9923
This paper describes the MOCHA project which aims to develop novel approaches for the analysis of data collected during Behavioral Response Studies (BRSs). BRSs are experiments aimed at directly quantifying the effects of controlled dosages of natural or anthropogenic stimuli (typically sound) on marine mammal behavior. These experiments typically result in low sample size, relative to variability, and so we are looking at a number of studies in combination to maximize the gain from each one. We describe a suite of analytical tools applied to BRS data on beaked whales, including a simulation study aimed at informing future experimental design.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/9923
2016-01-01T00:00:00Z
Harris, Catriona M.; Thomas, Len; Sadykova, Dinara; De Ruiter, Stacy Lynn; Tyack, Peter Lloyd; Southall, Brandon L.; Read, Andrew J.; Miller, Patrick
String C-groups as transitive subgroups of Sn
http://hdl.handle.net/10023/9794
If Γ is a string C-group which is isomorphic to a transitive subgroup of the symmetric group Sn (other than Sn and the alternating group An), then the rank of Γ is at most n/2+1, with finitely many exceptions (which are classified). It is conjectured that only the symmetric group has to be excluded.
Mon, 01 Feb 2016 00:00:00 GMT
http://hdl.handle.net/10023/9794
2016-02-01T00:00:00Z
Cameron, Peter Jephson; Fernandes, Maria Elisa; Leemans, Dimitri; Mixer, Mark
Habitat complexity in aquatic microcosms affects processes driven by detritivores
http://hdl.handle.net/10023/9749
Habitat complexity can influence predation rates (e.g. by providing refuge) but other ecosystem processes and species interactions might also be modulated by the properties of habitat structure. Here, we focussed on how complexity of artificial habitat (plastic plants), in microcosms, influenced short-term processes driven by three aquatic detritivores. The effects of habitat complexity on leaf decomposition, production of fine organic matter and pH levels were explored by measuring complexity in three ways: 1. as the presence vs. absence of habitat structure; 2. as the amount of structure (3 or 4.5 g of plastic plants); and 3. as the spatial configuration of structures (measured as fractal dimension). The experiment also addressed potential interactions among the consumers by running all possible species combinations. In the experimental microcosms, habitat complexity influenced how species performed, especially when comparing structure present vs. structure absent. Treatments with structure showed higher fine particulate matter production and lower pH compared to treatments without structures and this was probably due to higher digestion and respiration when structures were present. When we explored the effects of the different complexity levels, we found that the amount of structure added explained more than the fractal dimension of the structures. We give a detailed overview of the experimental design, statistical models and R codes, because our statistical analysis can be applied to other study systems (and disciplines such as restoration ecology). We further make suggestions of how to optimise statistical power when artificially assembling, and analysing, ‘habitat complexity’ by not confounding complexity with the amount of structure added. In summary, this study highlights the importance of habitat complexity for energy flow and the maintenance of ecosystem processes in aquatic ecosystems.
LF was supported in part by the Spanish Ministry of Economy and Competitiveness through the project SCARCE Consolider-Ingenio CSD2009-00065.
Tue, 01 Nov 2016 00:00:00 GMT
http://hdl.handle.net/10023/9749
2016-11-01T00:00:00Z
Flores, Lorea; Bailey, R. A.; Elosegi, Arturo; Larrañaga, Aitor; Reiss, Julia
Bayesian multi-species modelling of non-negative continuous ecological data with a discrete mass at zero
http://hdl.handle.net/10023/9626
Severe declines in the number of some songbirds over the last 40 years have caused heated debate amongst interested parties. Many factors have been suggested as possible causes for these declines, including an increase in the abundance and distribution of an avian predator, the Eurasian sparrowhawk Accipiter nisus. To test for evidence for a predator effect on the abundance of its prey, we analyse data on 10 species visiting garden bird feeding stations monitored by the British Trust for Ornithology in relation to the abundance of sparrowhawks. We apply Bayesian hierarchical models to data relating to averaged maximum weekly counts from a garden bird monitoring survey. These data are essentially continuous, bounded below by zero, but for many species show a marked spike at zero that many standard distributions would not be able to account for. We use the Tweedie distributions, which for certain areas of parameter space relate to continuous non-negative distributions with a discrete probability mass at zero, and are hence able to deal with the shape of the empirical distributions of the data.
The methods developed in this thesis begin by modelling single prey species independently with an avian predator as a covariate, using MCMC methods to explore parameter and model spaces. This model is then extended to a multiple-prey species model, testing for interactions between species as well as synchrony in their response to environmental factors and unobserved variation. Finally, we use a relatively new methodological framework, namely the SPDE approach in the INLA framework, to fit a multi-species spatio-temporal model to the ecological data.
The results from the analyses are consistent with the hypothesis that sparrowhawks are suppressing the numbers of some species of birds visiting garden feeding stations. Only the species most susceptible to sparrowhawk predation seem to be affected.
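The zero mass the thesis exploits is a standard property of the compound Poisson-gamma Tweedie family (power parameter 1 < p < 2): the Poisson number of gamma "jumps" is zero with probability exp(-λ), where λ = μ^(2-p) / (φ(2-p)). A minimal sketch, with illustrative parameter values (not fitted to the garden-bird data):

```python
import math

def tweedie_prob_zero(mu, phi, p):
    """P(Y = 0) for a Tweedie compound Poisson-gamma distribution with
    mean mu, dispersion phi and power parameter 1 < p < 2: the Poisson
    count of gamma components is zero with probability exp(-lam)."""
    assert 1.0 < p < 2.0, "compound Poisson-gamma requires 1 < p < 2"
    lam = mu ** (2.0 - p) / (phi * (2.0 - p))
    return math.exp(-lam)

# Smaller mean counts leave a larger point mass at exactly zero, matching
# the observed spike at zero in the weekly garden-bird counts.
tweedie_prob_zero(mu=0.5, phi=1.0, p=1.5)
```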
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/10023/9626
2015-01-01T00:00:00Z
Swallow, Ben
Effects of a scientific echo sounder on the behavior of short-finned pilot whales (Globicephala macrorhynchus)
http://hdl.handle.net/10023/9555
Active echo sounding devices are often employed for commercial or scientific purposes in the foraging habitats of marine mammals. We conducted an experiment off Cape Hatteras, North Carolina, USA, to assess whether the behavior of short-finned pilot whales (Globicephala macrorhynchus) changed when exposed to an EK60 scientific echo sounder. We attached digital acoustic recording tags (DTAGs) to nine individuals, five of which were exposed. A hidden Markov model to characterize diving states with and without exposure provided no evidence for a change in foraging behavior. However, generalized estimating equations to model changes in heading variance over the entire tag record under all experimental conditions showed a consistent increase in heading variance during exposure over all values of depth and pitch. This suggests that regardless of behavioral state, the whales changed their heading more frequently when the echo sounder was active. This response could represent increased vigilance in which whales maintained awareness of echo sounder location by increasing their heading variance and provides the first quantitative analysis on reactions of cetaceans to a scientific echo sounder.
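The paper's GEE analysis is more involved, but the quantity at its heart, how dispersed a set of headings is, can be sketched with the usual circular variance (1 minus the mean resultant length). The headings below are hypothetical:

```python
import math

def circular_variance(headings_deg):
    """Circular variance of a set of headings in degrees: 1 minus the
    mean resultant length. 0 = all headings identical, 1 = fully dispersed."""
    n = len(headings_deg)
    c = sum(math.cos(math.radians(h)) for h in headings_deg) / n
    s = sum(math.sin(math.radians(h)) for h in headings_deg) / n
    return 1.0 - math.hypot(c, s)

# Steady travel vs. frequent course changes (hypothetical headings):
steady = circular_variance([88, 90, 92, 91, 89])
erratic = circular_variance([10, 130, 250, 40, 300])
```

A whale changing heading more often during exposure, as reported here, would show up as a higher circular variance in the exposed segments of the tag record.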
This work was supported by award RC-2154 from the Strategic Environmental Research and Development Program and funding from the Naval Facilities Engineering Command Atlantic and NOAA Fisheries, Southeast Region.
Mon, 01 May 2017 00:00:00 GMT
Quick, Nicola; Scott-Hayward, Lindesay; Sadykova, Dinara; Nowacek, Doug; Read, Andrew

Modeling the aggregated exposure and responses of bowhead whales Balaena mysticetus to multiple sources of anthropogenic underwater sound
http://hdl.handle.net/10023/9259
Potential responses of marine mammals to anthropogenic underwater sound are usually assessed by researchers and regulators on the basis of exposure to a single, relatively loud sound source. However, marine mammals typically receive sounds from multiple, dynamic sources. We developed a method to aggregate modeled sounds from multiple sources and estimate the sound levels received by individuals. To illustrate the method, we modeled the sound fields of 9 sources associated with oil development and estimated the sound received over 47 d by a population of 10 000 simulated bowhead whales Balaena mysticetus on their annual migration through the Alaskan Beaufort Sea. Empirical data were sufficient to parameterize simulations of the distribution of individual whales over time and their range of movement patterns. We ran 2 simulations to estimate the sound exposure history and distances traveled by bowhead whales: one in which they could change their movement paths (avert) in response to set levels of sound and one in which they could not avert. When animals could not avert, about 2% of the simulated population was exposed to root mean square (rms) sound pressure levels (SPL) ≥ 180 dB re 1 μPa, a level that regulators in the U.S. often associate with injury. When animals could avert from sound levels that regulators often associate with behavioral disturbance (rms SPL > 160 dB re 1 μPa), <1% of the simulated population was exposed to levels associated with injury. Nevertheless, many simulated bowhead whales received sound levels considerably above ambient throughout their migration. Our method enables estimates of the aggregated level of sound to which populations are exposed over extensive areas and time periods.
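A standard way to aggregate received levels from multiple sources, and plausibly the kind of summation underlying such a model, is incoherent power addition: convert each level from dB to intensity, sum, and convert back. A minimal sketch (the function and values are illustrative, not the paper's):

```python
import math

def aggregate_spl(levels_db):
    """Combine received levels (dB re 1 uPa) from multiple sources,
    assuming incoherent addition: sum intensities, convert to dB."""
    total = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total)

# Two equal 160 dB sources combine to ~163 dB (a 3 dB increase),
# while a much weaker third source barely changes the total.
print(aggregate_spl([160, 160]))        # ~163.01
print(aggregate_spl([160, 160, 130]))   # ~163.01, weak source negligible
```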
This work was supported in part by a contract between BP Exploration (Alaska) Inc. and the University of California, Santa Barbara (E.F.), and by the North Slope Borough.
Mon, 02 May 2016 00:00:00 GMT
Ellison, William T.; Racca, Roberto; Clark, Christopher W.; Streever, Bill; Frankel, Adam S.; Fleishman, Erica; Angliss, Robyn; Berger, Joel; Ketten, Darlene; Guerra, Melania; Leu, Matthias; McKenna, Megan; Sformo, Todd; Southall, Brandon; Suydam, Robert; Thomas, Len

PReMiuM : an R package for profile regression mixture models using Dirichlet processes
http://hdl.handle.net/10023/8931
PReMiuM is a recently developed R package for Bayesian clustering using a Dirichlet process mixture model. This model is an alternative to regression models, non-parametrically linking a response vector to covariate data through cluster membership (Molitor, Papathomas, Jerrett, and Richardson 2010). The package allows binary, categorical, count and continuous responses, as well as continuous and discrete covariates. Additionally, predictions may be made for the response, and missing values for the covariates are handled. Several samplers and label switching moves are implemented along with diagnostic tools to assess convergence. A number of R functions for post-processing of the output are also provided. In addition to fitting mixtures, it may be of interest to determine which covariates actively drive the mixture components. This is implemented in the package as variable selection.
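The Dirichlet process prior behind such mixture models implies the Chinese restaurant process for cluster membership. The following toy sketch (plain Python, not PReMiuM code) draws a random partition from that prior:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a random partition of n items from the Chinese restaurant
    process, the clustering prior implied by a Dirichlet process with
    concentration parameter alpha."""
    rng = random.Random(seed)
    assignments = []   # cluster index assigned to each item
    sizes = []         # current cluster sizes
    for i in range(n):
        # item i joins existing cluster k with prob size_k / (i + alpha),
        # or opens a new cluster with prob alpha / (i + alpha)
        weights = sizes + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(1)
        else:
            sizes[k] += 1
        assignments.append(k)
    return assignments

print(crp_partition(10, alpha=1.0))
```

Larger alpha favours more, smaller clusters; in PReMiuM this concentration parameter is itself given a prior and sampled.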
Fri, 20 Mar 2015 00:00:00 GMT
Liverani, Silvia; Hastie, David; Azizi, Lamiae; Papathomas, Michail; Richardson, Sylvia

An efficient acoustic density estimation method with human detectors applied to gibbons in Cambodia
http://hdl.handle.net/10023/8842
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. 
We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
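A common ingredient of SECR models of this kind is a half-normal detection function, with detection probability declining with distance from a detector. The sketch below (illustrative parameter values, not the authors' fitted model) computes the probability that at least one listening post detects a call, assuming independent detections across posts:

```python
import math

def p_detect(dist_m, g0=0.9, sigma=500.0):
    """Half-normal detection function, a common SECR choice:
    probability one listener detects a call at distance dist_m."""
    return g0 * math.exp(-dist_m ** 2 / (2 * sigma ** 2))

def p_detected_by_any(source, listeners, **kw):
    """Probability that at least one listening post detects a call
    from `source`, assuming independent detections across posts."""
    p_miss = 1.0
    for lx, ly in listeners:
        d = math.hypot(source[0] - lx, source[1] - ly)
        p_miss *= 1.0 - p_detect(d, **kw)
    return 1.0 - p_miss

posts = [(0, 0), (800, 0), (400, 700)]   # hypothetical array (metres)
print(p_detected_by_any((400, 300), posts))
```

The SECR likelihood integrates quantities like this over all possible calling locations; estimated bearings enter as an extra observation model on top of the detection process.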
D. Kidney was supported by an Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Grant studentship (EPSRC grant EP/P505097/1). B. Stevenson was supported by a studentship jointly funded by the University of St Andrews and EPSRC, through the National Centre for Statistical Ecology (EPSRC grant EP/I000917/1).
Thu, 19 May 2016 00:00:00 GMT
Kidney, Darren; Rawson, Benjamin M.; Borchers, David Louis; Stevenson, Ben; Marques, Tiago A.; Thomas, Len

Gauging allowable harm limits to cumulative, sub-lethal effects of human activities on wildlife : a case-study approach using two whale populations
http://hdl.handle.net/10023/8716
As sublethal human pressures on marine wildlife and their habitats increase and interact in complex ways, there is a pressing need for methods to quantify cumulative impacts of these stressors on populations, and policy decisions about allowable harm limits. Few studies quantify population consequences of individual stressors, and fewer quantify synergistic effects. Incorporating all sources of uncertainty can cause predictions to span the range from negligible to catastrophic. Two places were identified to bound this problem through energetic mechanisms that reduce prey available to individuals. First, the US Marine Mammal Protection Act's Potential Biological Removal (PBR) equation was used as a placeholder allowable harm limit to represent the number of animals that can be removed annually without depleting a population below agreed-upon management targets. That rephrased the research question from, “How big could cumulative impacts be?” to “How big would cumulative impacts have to be to exceed an agreed-upon threshold?” Secondly, two data-rich case studies, namely Gulf of Maine humpback and northeast Pacific resident killer whales, were used as examples to parameterize the weakest link, namely between prey availability and demography. Given no additional information, the model predicted that human activities need only reduce prey available to the killer whale population by ~10% to cause a population-level take, through reduced fecundity and/or survival, equivalent to PBR. By contrast, in the humpback population, reduction in prey availability of ~50% was needed to cause a similar, PBR-sized effect. The paper describes an approach – results are merely illustrative. The two case studies differ in prey specialization, life history, and, no doubt, proximity to carrying capacity. 
This method of inverting the problem refocuses discussions on what level of prey depletion – via competition with commercial fisheries, displacement from feeding areas through noise-generating activities, or acoustic masking of signals used to detect prey – would have to occur to exceed allowable harm limits set for lethal takes in fisheries or other, more easily quantifiable, human activities.
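The PBR equation referenced above multiplies a minimum population estimate by half the maximum net productivity rate and a recovery factor. A minimal sketch with illustrative inputs (not the paper's case-study values):

```python
def pbr(n_min, r_max, recovery_factor):
    """Potential Biological Removal (US Marine Mammal Protection Act):
    the number of animals that may be removed annually while allowing
    the stock to stay at or recover to its management target.

    PBR = N_min * (R_max / 2) * F_R
    """
    return n_min * 0.5 * r_max * recovery_factor

# Illustrative inputs: a minimum population estimate of 300, the
# default cetacean maximum growth rate of 4%, recovery factor 0.5.
print(pbr(300, 0.04, 0.5))   # 3.0 animals per year
```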
Rob Williams was supported by a Marie Curie International Incoming Fellowship within the 7th European Community Framework Programme (Project CONCEAL, FP7, PIIF-GA-2009-253407).
Mon, 01 Aug 2016 00:00:00 GMT
Williams, Rob; Thomas, Len; Ashe, Erin; Clark, Christopher W.; Hammond, Philip S.

Constructing flag-transitive, point-imprimitive designs
http://hdl.handle.net/10023/8546
We give a construction of a family of designs with a specified point-partition and determine the subgroup of automorphisms leaving invariant the point-partition. We give necessary and sufficient conditions for a design in the family to possess a flag-transitive group of automorphisms preserving the specified point-partition. We give examples of flag-transitive designs in the family, including a new symmetric 2-(1408,336,80) design with automorphism group 2^12:((3⋅M22):2) and a construction of one of the families of the symplectic designs (the designs S^−(n) ) exhibiting a flag-transitive, point-imprimitive automorphism group.
Wed, 04 May 2016 00:00:00 GMT
Cameron, Peter Jephson; Praeger, Cheryl E.

Permutation groups and transformation semigroups : results and problems
http://hdl.handle.net/10023/8532
J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group G and a single non-permutation a. Our particular concern is the influence that properties of G (related to homogeneity, transitivity and primitivity) have on the structure of the semigroup. In the first part of the paper, we consider properties of S=<G,a> such as regularity and generation. The second part is a brief report on the synchronization project, which aims to decide in what circumstances S contains an element of rank 1. The paper closes with a list of open problems on permutation groups and linear groups, together with comments on their impact on semigroups. These two research directions lead to very interesting and challenging problems on primitive permutation groups whose solutions require combining results from several different areas of mathematics, certainly fulfilling both of Howie's elegance and value tests in a new and fascinating way.
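The object S=<G,a> can be explored computationally for tiny cases by closing the generators under composition and checking for a rank-1 (constant) element, i.e. synchronization. A toy Python sketch for S_3 together with one rank-2 non-permutation (the specific example is mine):

```python
from itertools import permutations

def compose(f, g):
    """(f o g)(x) = f(g(x)); transformations are tuples on {0,...,n-1}."""
    return tuple(f[g[x]] for x in range(len(g)))

def semigroup(generators):
    """Close a set of transformations under composition."""
    elems = set(generators)
    frontier = set(generators)
    while frontier:
        new = {compose(f, g) for f in elems for g in frontier} | \
              {compose(f, g) for f in frontier for g in elems}
        frontier = new - elems
        elems |= frontier
    return elems

def rank(f):
    return len(set(f))

# G = S_3 acting on {0,1,2}, plus one non-permutation a of rank 2
G = [tuple(p) for p in permutations(range(3))]
a = (0, 0, 1)
S = semigroup(G + [a])
print(len(S), any(rank(f) == 1 for f in S))
```

Here S turns out to be the full transformation monoid on 3 points, and it contains constant maps, so this pair (G, a) synchronizes.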
Thu, 01 Oct 2015 00:00:00 GMT
Araujo, Joao; Cameron, Peter Jephson

Bayesian sequential tests of the initial size of a linear pure death process
http://hdl.handle.net/10023/8286
We provide a recursive algorithm for determining the sampling plans of invariant Bayesian sequential tests of the initial size of a linear pure death process of unknown rate. These tests compare favourably with the corresponding truncated sequential probability ratio tests.
Fri, 01 May 2015 00:00:00 GMT
Goudie, I.B.J.

Using species proportions to quantify turnover in biodiversity
http://hdl.handle.net/10023/8033
Quantifying species turnover is an important aspect of biodiversity monitoring. Turnover measures are usually based on species presence/absence data, reflecting the rate at which species are replaced. However, measures that reflect the rate at which individuals of a species are replaced by individuals of another species are far more sensitive to change. In this paper, we propose families of turnover measures that reflect changes in species proportions. We study the properties of our measures, and use simulation to assess their success in detecting turnover. Using data on the British farmland bird community from the breeding bird survey, we evaluate our measures to quantify temporal turnover and how it varies across the British mainland.
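One simple member of the family of proportion-based measures one might consider is half the sum of absolute changes in species proportions between two time points (this particular index is an illustration of the idea, not necessarily the measure proposed in the paper):

```python
def proportion_turnover(p_then, p_now):
    """Illustrative turnover measure on species proportions: half the
    sum of absolute changes in proportion. 0 means identical community
    composition; 1 means complete replacement of individuals."""
    assert abs(sum(p_then) - 1) < 1e-9 and abs(sum(p_now) - 1) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p_then, p_now))

# Three-species community: a modest shift vs. complete replacement.
# Note a presence/absence measure would report zero turnover for the
# first case, since the species list is unchanged.
print(proportion_turnover([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # ~0.1
print(proportion_turnover([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 1.0
```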
We are very grateful to all the volunteers who have contributed to the BBS. Yuan was funded by EPSRC/NERC grant EP/I000917/1. Harrison was funded by the Scottish Government’s Centre of Expertise ClimateXChange (www.climatexchange.org.uk).
Wed, 01 Jun 2016 00:00:00 GMT
Yuan, Yuan; Buckland, Stephen Terrence; Harrison, Phil; Foss, Sergey; Johnston, Alison

Efficient abstracting of dive profiles using a broken-stick model
http://hdl.handle.net/10023/7972
For diving animals, animal-borne sensors are used to collect time-depth information for studying behaviour, ranging patterns and foraging ecology. Often, this information needs to be compressed for storage or transmission. Widely used devices called conductivity-temperature-depth satellite relay data loggers (CTD-SRDLs) sample time and depth at high resolution during a dive and then abstract the time-depth trajectory using a broken-stick model (BSM). This approximation method can summarize efficiently the curvilinear shape of a dive, using a piecewise linear shape with a small, fixed number of vertices, or break points. We present the process of abstracting dives using the BSM and quantify its performance, by measuring the uncertainty associated with the profiles it produces. We develop a method for obtaining a confidence zone and an index for the goodness-of-fit (dive zone index, DZI) for abstracted dive profiles. We validate our results with a case study using dives from elephant seals (Mirounga spp.). We use generalized additive models (GAMs) to determine whether the DZI can be used as a proxy for an absolute measure of fit and investigate the relationship between the DZI and the dive shape. We found a strong correlation between the residual sum of squares (RSS) for the difference between the detailed and abstracted profiles, and the DZI and maximum residual (R4), for dives resulting from CTD-SRDLs (69% deviance explained). On its own, the DZI explained a lower percentage of deviance which was variable for abstracted dives with different numbers of break points. We also found evidence for systematic differences in the DZI for different dive shapes (65% deviance explained). Although the proportional loss of information in the abstraction of time-depth dive profiles by BSM is high, what remains is sufficient to infer goodness-of-fit of the abstracted profile by reversing the abstraction process. 
Our results suggest that together the DZI and R4 can be used as a proxy for the RSS, and we present the method for obtaining these metrics for BSM-abstracted profiles.
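The broken-stick abstraction can be sketched as a greedy algorithm: keep the first and last samples, then repeatedly promote the sample lying furthest in depth from the current piecewise-linear profile. This is a simplified illustration of the idea, not the CTD-SRDL firmware implementation:

```python
def broken_stick(times, depths, n_points=4):
    """Greedy broken-stick abstraction of a dive profile: retain the
    first and last samples, then repeatedly add the sample with the
    largest vertical distance from the current piecewise-linear
    profile, until n_points break points are kept."""
    def interp(t, kept):
        # depth of the current piecewise-linear profile at time t
        for (t0, d0), (t1, d1) in zip(kept, kept[1:]):
            if t0 <= t <= t1:
                return d0 + (d1 - d0) * (t - t0) / (t1 - t0)
        return kept[-1][1]  # t beyond the last kept point

    kept = [(times[0], depths[0]), (times[-1], depths[-1])]
    while len(kept) < n_points:
        worst, err = None, -1.0
        for t, d in zip(times, depths):
            e = abs(d - interp(t, kept))
            if e > err:
                worst, err = (t, d), e
        kept.append(worst)
        kept.sort()
    return kept

# A toy dive: descend, bottom time, ascend
times = list(range(11))
depths = [0, 50, 90, 100, 105, 100, 102, 98, 60, 20, 0]
print(broken_stick(times, depths, n_points=4))
# → [(0, 0), (4, 105), (7, 98), (10, 0)]
```

Reversing this process, i.e. asking how far the discarded samples could have strayed from the kept segments, is what allows a goodness-of-fit index like the DZI to be recovered from the abstracted profile alone.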
This work was supported by SMRU Ltd (now SMRU Marine) in the form of a PhD fellowship (T.P.). Completion of the manuscript was supported by a National Research Foundation Scarce Skills Postdoctoral Fellowship at the University of Cape Town, South Africa (T.P.). The CTD-SRDL data presented in this manuscript were collected as part of a project funded by the Natural Environment Research Council (NERC) grants NE/E018289/1 and NER/D/S/2002/00426.
Sun, 01 Mar 2015 00:00:00 GMT
Photopoulou, T.; Lovell, Philip; Fedak, M.A.; Thomas, L.; Matthiopoulos, J.

Circular designs balanced for neighbours at distances one and two
http://hdl.handle.net/10023/7454
We define three types of neighbour-balanced designs for experiments where the units are arranged in a circle or single line in space or time. The designs are balanced with respect to neighbours at distance one and at distance two. The variants come from allowing or forbidding self-neighbours, and from considering neighbours to be directed or undirected. For two of the variants, we give a method of constructing a design for all values of the number of treatments, except for some small values where it is impossible. In the third case, we give a partial solution that covers all sizes likely to be used in practice.
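Whether a given circular layout is neighbour-balanced at distances one and two can be checked directly by counting ordered treatment pairs. A small sketch (the example layout is mine; it is balanced at distance one only, illustrating why distance two is the harder constraint):

```python
from collections import Counter

def neighbour_counts(layout, distance):
    """Count ordered treatment pairs at the given distance around a
    circular layout (position i paired with position i + distance)."""
    n = len(layout)
    return Counter((layout[i], layout[(i + distance) % n])
                   for i in range(n))

def is_balanced(layout, distances=(1, 2)):
    """True if, at each distance, every ordered treatment pair
    (including self-pairs) occurs equally often."""
    treatments = sorted(set(layout))
    pairs = [(a, b) for a in treatments for b in treatments]
    for d in distances:
        counts = neighbour_counts(layout, d)
        if len({counts[p] for p in pairs}) != 1:
            return False
    return True

print(is_balanced([0, 0, 1, 1], distances=(1,)))  # each ordered pair once
print(is_balanced([0, 0, 1, 1]))                  # fails at distance 2
```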
Mon, 01 Dec 2014 00:00:00 GMT
Aldred, R. E. L.; Bailey, R. A.; Mckay, Brendan D.; Wanless, Ian M.

Spatial variation in maximum dive depth in gray seals in relation to foraging.
http://hdl.handle.net/10023/6886
Habitat preference maps are a way of representing animals’ space use in two dimensions. For marine animals, the third dimension is an important aspect of spatial ecology. We used dive data from seven gray seals Halichoerus grypus (a primarily benthic forager) collected with GPS phone tags (Sea Mammal Research Unit) to investigate the distribution of the maximum depth visited in each dive. We modeled maximum dive depth as a function of spatiotemporal covariates using a generalized additive mixed model (GAMM) with individual as a random effect. Bathymetry, horizontal displacement, latitude and longitude, Julian day, sediment type, and light conditions accounted for 37% of the variability in the data. Persistent patterns of autocorrelation in the raw data suggest that individual intrinsic rhythm might be an important factor, not captured by external covariates. The strength of using this statistical method to generate spatial predictions of the distribution of maximum dive depth is its applicability to other plunge and pursuit divers. Despite being predictions of a point estimate, these maps provide some insight into the third dimension of habitat use in marine animals. The capacity to predict this aspect of vertical habitat use may help avoid conflict between animal habitat and coastal or offshore developments.
Theoni Photopoulou was funded by SMRU Ltd in the form of a Ph.D. studentship, 2008–2012.
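As a much-simplified, hypothetical stand-in for the GAMM described above (which used proper smooth terms and a true random effect for individual), the sketch below fits an additive polynomial term for bathymetry plus dummy-coded per-seal intercepts to synthetic dive data; every name and number here is invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the seal dive data: maximum dive depth driven by
# a smooth function of bathymetry plus a per-individual random intercept.
n_seals, n_dives = 7, 200
seal = np.repeat(np.arange(n_seals), n_dives)
bathy = rng.uniform(10, 200, n_seals * n_dives)
intercept = rng.normal(0, 5, n_seals)[seal]
depth = 0.7 * bathy - 0.001 * bathy**2 + intercept + rng.normal(0, 3, seal.size)

# Crude additive fit: polynomial basis for the smooth bathymetry effect,
# plus dummy-coded individual effects (a fixed-effect approximation to
# the random intercept in the paper's GAMM).
X = np.column_stack([bathy, bathy**2]
                    + [(seal == i).astype(float) for i in range(n_seals)])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((depth - fitted) ** 2) / np.sum((depth - depth.mean()) ** 2)
print(round(r2, 2))
```

On this synthetic data the covariates explain most of the variability; in the real data the comparable figure was 37%, which is part of what motivates the paper's point about unexplained individual rhythms.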
Tue, 01 Jul 2014 00:00:00 GMT | http://hdl.handle.net/10023/6886 | Photopoulou, Theoni; Fedak, Mike; Thomas, Len; Matthiopoulos, Jason
Using accelerometers to determine the calling behavior of tagged baleen whales
http://hdl.handle.net/10023/6623
Low-frequency acoustic signals generated by baleen whales can propagate over vast distances, making the assignment of calls to specific individuals problematic. Here, we report the novel use of acoustic recording tags equipped with high-resolution accelerometers to detect vibrations from the surface of two tagged fin whales that directly match the timing of recorded acoustic signals. A tag deployed on a buoy in the vicinity of calling fin whales and a recording from a tag that had just fallen off a whale were able to detect calls acoustically but did not record corresponding accelerometer signals that were measured on calling individuals. Across the hundreds of calls measured on two tagged fin whales, the accelerometer response was generally anisotropic across all three axes, appeared to depend on tag placement and increased with the level of received sound. These data demonstrate that high-sample rate accelerometry can provide important insights into the acoustic behavior of baleen whales that communicate at low frequencies. This method helps identify vocalizing whales, which in turn enables the quantification of call rates, a fundamental component of models used to estimate baleen whale abundance and distribution from passive acoustic monitoring.
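The matching logic described here — attribute a call to the tagged whale only when an acoustic detection coincides with an accelerometer burst — can be sketched on synthetic signals. All sample rates, call times and thresholds below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                      # Hz, illustrative tag sample rate
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic tag data

# Calls at known times appear in the hydrophone channel and, for the
# tagged (calling) whale only, as matching bursts in the accelerometer.
call_times = [10.0, 25.0, 40.0]
acoustic = rng.normal(0, 0.1, t.size)
accel = rng.normal(0, 0.1, t.size)
for c in call_times:
    burst = (t > c) & (t < c + 1.0)
    acoustic[burst] += np.sin(2 * np.pi * 20 * t[burst])        # 20 Hz "call"
    accel[burst] += 0.5 * np.sin(2 * np.pi * 20 * t[burst])     # weaker vibration

def detect(signal, thresh):
    """Flag 1 s windows whose RMS exceeds a noise-based threshold."""
    win = fs
    rms = np.sqrt([np.mean(signal[i:i + win] ** 2)
                   for i in range(0, signal.size - win, win)])
    return set(np.nonzero(rms > thresh)[0])

# A call is attributed to the tagged animal only when the acoustic
# detection has a simultaneous accelerometer detection.
matched = detect(acoustic, 0.3) & detect(accel, 0.2)
print(sorted(matched))
```

A tag on a nearby buoy would produce acoustic detections with no accelerometer counterpart, so `matched` would be empty — the control comparison the authors report.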
Tue, 15 Jul 2014 00:00:00 GMT | http://hdl.handle.net/10023/6623 | Goldbogen, Jeremy; De Ruiter, Stacy Lynn; Stimpert, Alison; Calambokidis, John; Friedlaender, Ari; Schorr, Greg; Moretti, David; Tyack, Peter Lloyd; Southall, Brandon
Nested row-column designs for near-factorial experiments with two treatment factors and one control treatment
http://hdl.handle.net/10023/6556
This paper presents some methods of designing experiments in a block design with nested rows and columns. The treatments consist of all combinations of levels of two treatment factors, with an additional control treatment.
The authors also thank Queen Mary, University of London, the University of St Andrews and the Poznan University of Life Sciences for financial support. The second author was also supported by the British-Polish Young Scientists Programme, grant WAR/342/116.
Thu, 01 Oct 2015 00:00:00 GMT | http://hdl.handle.net/10023/6556 | Bailey, Rosemary Anne; Lacka, Agnieszka
The effect of animal movement on line transect estimates of abundance
http://hdl.handle.net/10023/6466
Line transect sampling is a distance sampling method for estimating the abundance of wild animal populations. One key assumption of this method is that all animals are detected at their initial location. Animal movement independent of the transect and observer can thus cause substantial bias. We present an analytic expression for this bias when detection within the transect is certain (strip transect sampling) and use simulation to quantify bias when detection falls off with distance from the line (line transect sampling). We also explore the non-linear relationship between bias, detection, and animal movement by varying detectability and movement type. We consider animals that move in randomly orientated straight lines, which provides an upper bound on bias, and animals that are constrained to a home range of random radius. We find that bias is reduced when animal movement is constrained, and bias is considerably smaller in line transect sampling than strip transect sampling provided that mean animal speed is less than observer speed. By contrast, when mean animal speed exceeds observer speed the bias in line transect sampling becomes comparable with, and may exceed, that of strip transect sampling. Bias from independent animal movement is reduced by the observer searching further perpendicular to the transect, searching a shorter distance ahead and by ignoring animals that may overtake the observer from behind. However, when animals move in response to the observer, the standard practice of searching further ahead should continue as the bias from responsive movement is often greater than that from independent movement.
This work was supported by the University of St Andrews (http://www.st-andrews.ac.uk/; RG, STB, LT) and by a summer scholarship and PhD grant from The Carnegie Trust for the Universities of Scotland (http://www.carnegie-trust.org/) to RG.
Mon, 23 Mar 2015 00:00:00 GMT | http://hdl.handle.net/10023/6466 | Glennie, R.; Buckland, S.T.; Thomas, L.
Mixture models for distance sampling detection functions
http://hdl.handle.net/10023/6463
We present a new class of models for the detection function in distance sampling surveys of wildlife populations, based on finite mixtures of simple parametric key functions such as the half-normal. The models share many of the features of the widely-used “key function plus series adjustment” (K+A) formulation: they are flexible, produce plausible shapes with a small number of parameters, allow incorporation of covariates in addition to distance and can be fitted using maximum likelihood. One important advantage over the K+A approach is that the mixtures are automatically monotonic non-increasing and non-negative, so constrained optimization is not required to ensure distance sampling assumptions are honoured. We compare the mixture formulation to the K+A approach using simulations to evaluate its applicability in a wide set of challenging situations. We also re-analyze four previously problematic real-world case studies. We find mixtures outperform K+A methods in many cases, particularly spiked line transect data (i.e., where detectability drops rapidly at small distances) and larger sample sizes. We recommend that current standard model selection methods for distance sampling detection functions are extended to include mixture models in the candidate set.
Funding: EPSRC DTG
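The mixture form described above is easy to write down. A two-component half-normal sketch (weights and scale parameters invented for illustration) shows why the properties the abstract highlights — g(0) = 1, monotone non-increasing, non-negative — come for free, and how a small-scale component produces a "spiked" shape:

```python
import numpy as np

def halfnormal_mixture(x, weights, sigmas):
    """Detection probability g(x) as a finite mixture of half-normal keys.
    With weights summing to 1, g(0) = 1 by construction (illustrative
    form, not the paper's implementation)."""
    x = np.asarray(x, dtype=float)
    return sum(w * np.exp(-x**2 / (2 * s**2)) for w, s in zip(weights, sigmas))

x = np.linspace(0, 100, 501)
# A narrow component (sigma = 5) gives the spike at small distances; the
# wide component (sigma = 40) carries the tail.
g = halfnormal_mixture(x, weights=[0.7, 0.3], sigmas=[5.0, 40.0])

assert np.isclose(g[0], 1.0)     # certain detection at distance zero
assert np.all(np.diff(g) <= 0)   # monotone non-increasing, no constraints needed
assert np.all(g >= 0)            # non-negative everywhere
print(round(g[0], 3))
```

Each half-normal component is itself non-increasing and non-negative, so any non-negative weighted sum is too — which is why, unlike the key-plus-adjustment formulation, no constrained optimization is needed.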
Fri, 20 Mar 2015 00:00:00 GMT | http://hdl.handle.net/10023/6463 | Miller, David Lawrence; Thomas, Len
Most switching classes with primitive automorphism groups contain graphs with trivial groups
http://hdl.handle.net/10023/6429
The operation of switching a graph Gamma with respect to a subset X of the vertex set interchanges edges and non-edges between X and its complement, leaving the rest of the graph unchanged. This is an equivalence relation on the set of graphs on a given vertex set, so we can talk about the automorphism group of a switching class of graphs. It might be thought that switching classes with many automorphisms would have the property that all their graphs also have many automorphisms. But the main theorem of this paper shows a different picture: with finitely many exceptions, if a non-trivial switching class S has primitive automorphism group, then it contains a graph whose automorphism group is trivial. We also find all the exceptional switching classes; up to complementation, there are just six.
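The switching operation itself is elementary to implement. A sketch on a 5-cycle (an illustrative example, not one from the paper) verifies two facts used implicitly above: switching by the same set twice restores the original graph (so switching generates an equivalence relation), and edges within X or within its complement are untouched:

```python
def switch(edges, X, vertices):
    """Switch a graph with respect to vertex subset X: interchange edges
    and non-edges between X and its complement, leaving the rest alone."""
    X = set(X)
    new = set(edges)
    for u in X:
        for v in set(vertices) - X:
            e = frozenset((u, v))
            if e in new:
                new.remove(e)   # edge across the cut becomes a non-edge
            else:
                new.add(e)      # non-edge across the cut becomes an edge
    return new

V = range(5)
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}  # 5-cycle
H = switch(G, {0, 1}, V)

# Switching is an involution: applying it twice with the same set gives G back.
assert switch(H, {0, 1}, V) == G
# Edges inside X (0-1) and inside the complement (2-3) are unchanged.
assert frozenset((0, 1)) in H and frozenset((2, 3)) in H
print(sorted(tuple(sorted(e)) for e in H))
```

Because switching by X and then by Y equals switching by the symmetric difference of X and Y, the classes partition the graphs on a fixed vertex set, which is what makes "the automorphism group of a switching class" well defined.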
Mon, 01 Jun 2015 00:00:00 GMT | http://hdl.handle.net/10023/6429 | Cameron, Peter Jephson; Spiga, Pablo
Random coefficient models for complex longitudinal data
http://hdl.handle.net/10023/6386
Longitudinal data are common in biological research. However, real data sets vary considerably in terms of their structure and complexity and present many challenges for statistical modelling. This thesis proposes a series of methods using random coefficients for modelling two broad types of longitudinal response: normally distributed measurements and binary recapture data.
Biased inference can occur in linear mixed-effects modelling if subjects are drawn from a number of unknown sub-populations, or if the residual covariance is poorly specified. To address some of the shortcomings of previous approaches in terms of model selection and flexibility, this thesis presents methods for: (i) determining the presence of latent grouping structures using a two-step approach, involving regression splines for modelling functional random effects and mixture modelling of the fitted random effects; and (ii) flexible modelling of the residual covariance matrix using regression splines to specify smooth and potentially non-monotonic variance and correlation functions.
Spatially explicit capture-recapture methods for estimating the density of animal populations have shown a rapid increase in popularity over recent years. However, further refinements to existing theory and fitting software are required to apply these methods in many situations. This thesis presents: (i) an analysis of recapture data from an acoustic survey of gibbons using supplementary data in the form of estimated angles to detections, (ii) the development of a multi-occasion likelihood including a model for stochastic availability using a partially observed random effect (interpreted in terms of calling behaviour in the case of gibbons), and (iii) an analysis of recapture data from a population of radio-tagged skates using a conditional likelihood that allows the density of animal activity centres to be modelled as functions of time, space and animal-level covariates.
Fri, 27 Jun 2014 00:00:00 GMT | http://hdl.handle.net/10023/6386 | Kidney, Darren
Higher biodiversity is required to sustain multiple ecosystem processes across temperature regimes
http://hdl.handle.net/10023/5975
Biodiversity loss is occurring rapidly worldwide, yet it is uncertain whether few or many species are required to sustain ecosystem functioning in the face of environmental change. The importance of biodiversity might be enhanced when multiple ecosystem processes (termed multifunctionality) and environmental contexts are considered, yet no studies have quantified this explicitly to date. We measured five key processes and their combined multifunctionality at three temperatures (5, 10 and 15 °C) in freshwater aquaria containing different animal assemblages (1-4 benthic macroinvertebrate species). For single processes, biodiversity effects were weak and were best predicted by additive-based models, i.e. polyculture performances represented the sum of their monoculture parts. There were, however, significant effects of biodiversity on multifunctionality at the low and the high (but not the intermediate) temperature. Variation in the contribution of species to processes across temperatures meant that greater biodiversity was required to sustain multifunctionality across different temperatures than was the case for single processes. This suggests that previous studies might have underestimated the importance of biodiversity in sustaining ecosystem functioning in a changing environment.
The authors thank the Natural Environment Research Council for financial support awarded to G. W. (Grant reference: NE/D013305/1) that funded D. M. P.'s research. Accepted 11 July 2014.
Thu, 01 Jan 2015 00:00:00 GMT | http://hdl.handle.net/10023/5975 | Perkins, D.M.; Bailey, R.A.; Dossena, M.; Gamfeldt, L.; Reiss, J.; Trimmer, M.; Woodward, G.
Optimal cross-over designs for full interaction models
http://hdl.handle.net/10023/5768
We consider repeated measurement designs when a residual or carry-over effect may be present in at most one later period. Since assuming an additive model may be unrealistic for some applications and leads to biased estimation of treatment effects, we consider a model with interactions between carry-over and direct treatment effects. When the aim of the experiment is to study the effects of a treatment used alone, we obtain universally optimal approximate designs. We also propose some efficient designs with a reduced number of subjects.
July 2014
Sat, 01 Nov 2014 00:00:00 GMT | http://hdl.handle.net/10023/5768 | Bailey, Rosemary Anne; Druilhet, Pierre
Most primitive groups are full automorphism groups of edge-transitive hypergraphs
http://hdl.handle.net/10023/5580
We prove that, for a primitive permutation group G acting on a set of size n, other than the alternating group, the probability that Aut(X, Y^G) = G for a random subset Y of X tends to 1 as n tends to infinity. So the property of the title holds for all primitive groups except the alternating groups and finitely many others. This answers a question of M. Klin. Moreover, we give an upper bound of n^(1/2+ε) for the minimum size of the edges in such a hypergraph. This is essentially best possible.
Thu, 01 Jan 2015 00:00:00 GMT | http://hdl.handle.net/10023/5580 | Babai, Laszlo; Cameron, Peter Jephson
The effects of acoustic misclassification on cetacean species abundance estimation
http://hdl.handle.net/10023/5163
To estimate the density or abundance of a cetacean species using acoustic detection data, it is necessary to correctly identify the species that are detected. Developing an automated species classifier with a 100% correct classification rate for any species is likely to remain out of reach. It is therefore necessary to consider the effect of misidentified detections on the observed detection counts, and consequently on abundance or density estimation, and to develop methods to cope with these misidentifications. If misclassification rates are known, it is possible to estimate the true numbers of detected calls without bias. However, misclassification and uncertainties in the level of misclassification increase the variance of the estimates. If the true numbers of calls from different species are similar, then a small amount of misclassification between species and a small amount of uncertainty around the classification probabilities does not have an overly detrimental effect on the overall variance. However, if there is a difference in the encounter rate between species calls and/or a large amount of uncertainty in misclassification rates, then the variance of the estimates becomes very large, and this dramatically increases the variance of the final abundance estimate.
This work was funded through the Natural Environment Research Council and SMRU Ltd.
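The two claims above — known misclassification rates allow unbiased correction, while uncertainty in those rates inflates variance — can be illustrated with a hypothetical two-species confusion matrix (all numbers invented, and the perturbation scheme is a crude stand-in for the paper's treatment of uncertainty):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical confusion matrix: C[i, j] = P(call from species j is
# classified as species i); columns sum to 1.
C = np.array([[0.9, 0.2],
              [0.1, 0.8]])
true_calls = np.array([1000.0, 1000.0])

# Expected observed counts mix the two species according to C.
expected_obs = C @ true_calls

# With known misclassification rates, inverting C recovers the true
# counts without bias ...
recovered = np.linalg.solve(C, expected_obs)
print(np.round(recovered))

# ... but uncertainty in C propagates into the estimates: perturbing the
# classification rates spreads out the recovered counts.
est = []
for _ in range(2000):
    noisy = C + rng.normal(0, 0.03, C.shape)
    noisy /= noisy.sum(axis=0)          # keep columns summing to 1
    est.append(np.linalg.solve(noisy, expected_obs)[0])
print(round(float(np.std(est)), 1))
```

Making the true call counts very unequal (say 100 vs 10000) in this sketch blows up the spread further, echoing the abstract's point about differing encounter rates.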
Wed, 25 Dec 2013 00:00:00 GMT | http://hdl.handle.net/10023/5163 | Caillat, Marjolaine Annie; Thomas, Len; Gillespie, Douglas Michael
Using hidden Markov models to deal with availability bias on line transect surveys
http://hdl.handle.net/10023/5017
We develop estimators for line transect surveys of animals that are stochastically unavailable for detection while within detection range. The detection process is formulated as a hidden Markov model with a binary state-dependent observation model that depends on both perpendicular and forward distances. This provides a parametric method of dealing with availability bias when estimates of availability process parameters are available even if series of availability events themselves are not. We apply the estimators to an aerial and a shipboard survey of whales, and investigate their properties by simulation. They are shown to be more general and more flexible than existing estimators based on parametric models of the availability process. We also find that methods using availability correction factors can be very biased when surveys are not close to being instantaneous, as can estimators that assume temporal independence in availability when there is temporal dependence.
This work was supported by EPSRC grant EP/I000917/1
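The availability process at the heart of this approach can be sketched as a two-state Markov chain. The transition rates below are invented for illustration, and the full model (with detection depending on perpendicular and forward distance) is omitted; the sketch only shows why an instantaneous "proportion of time at surface" correction misleads when the animal is within detection range for many time steps:

```python
import numpy as np

# Two-state availability chain (state 0 = unavailable, 1 = available);
# transition probabilities per time step are illustrative.
P = np.array([[0.95, 0.05],   # unavailable -> {unavailable, available}
              [0.20, 0.80]])  # available   -> {unavailable, available}

# Stationary distribution: solve pi P = pi subject to sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
print(np.round(pi, 3))

def p_available_at_least_once(n):
    """P(available at least once in n steps in view), starting from the
    stationary distribution: complement of staying unavailable throughout."""
    return 1.0 - pi[0] * P[0, 0] ** (n - 1)

for n in (1, 10, 50):
    print(n, round(p_available_at_least_once(n), 3))
```

With these rates the instantaneous availability is pi[1] = 0.2, but over 50 steps in view the animal is almost certain to surface at least once — so a correction factor built from surface time alone would be badly biased for a non-instantaneous survey, which is the paper's point.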
Tue, 01 Jan 2013 00:00:00 GMT | http://hdl.handle.net/10023/5017 | Borchers, David Louis; Zucchini, Walter; Heide-Jørgensen, M.P.; Cañadas, A.; Langrock, Roland
Novel methods for species distribution mapping including spatial models in complex regions
http://hdl.handle.net/10023/4514
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.
The first new method, Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently-developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low Rank TPS (GLTPS; Wang and Ranalli [2007]) and the soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets.
Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA, Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface that is being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting.
There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.
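The combination step above can be illustrated in its simplest normal-normal form, where Bayes' theorem reduces to inverse-variance (precision) weighting of the overlapping surfaces. This is a simplified stand-in for the thesis framework — the kernel smoothing applied where surfaces join is omitted, and all numbers are invented:

```python
import numpy as np

# Two overlapping density estimates on the same three-cell grid, each
# with a standard-error surface (illustrative values only).
d1, se1 = np.array([2.0, 3.0, 4.0]), np.array([0.5, 0.5, 2.0])
d2, se2 = np.array([2.4, 2.0, 3.0]), np.array([1.0, 1.0, 0.5])

# Precision-weighted combination: the normal-normal conjugate special
# case of combining surfaces via Bayes' theorem. The more precise
# estimate dominates cell by cell.
w1, w2 = 1 / se1**2, 1 / se2**2
combined = (w1 * d1 + w2 * d2) / (w1 + w2)
combined_se = np.sqrt(1 / (w1 + w2))
print(np.round(combined, 2), np.round(combined_se, 2))
```

Note that the combined standard error is never worse than either input's, which is what makes collating maps from multiple sources attractive as new surveys become available.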
Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data. In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than those from a traditional GAM-based method. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion in the
Scottish network of Nature Conservation Marine Protected Areas.
Tue, 05 Nov 2013 00:00:00 GMT
http://hdl.handle.net/10023/4514
2013-11-05T00:00:00Z
Scott-Hayward, Lindesay Alexandra Sarah

Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.
Modelling catch sampling uncertainty in fisheries stock assessment : the Atlantic-Iberian sardine case
http://hdl.handle.net/10023/4474
The statistical assessment of harvested fish populations, such as the Atlantic-Iberian sardine (AIS)
stock, needs to deal with uncertainties inherent in fisheries systems. Uncertainties arising from
sampling errors and stochasticity in stock dynamics must be incorporated in stock assessment
models so that management decisions are based on realistic evaluation of the uncertainty about
the status of the stock. The main goal of this study is to develop a stock assessment framework
that accounts for some of the uncertainties associated with the AIS stock that are currently not
integrated into stock assessment models. In particular, it focuses on accounting for the uncertainty
arising from the catch data sampling process.
The central innovation of this thesis is the development of a Bayesian integrated stock assessment
(ISA) model, in which an observation model explicitly links stock dynamics parameters
with statistical models for the various types of data observed from catches of the AIS stock.
This allows for systematic and statistically consistent propagation of the uncertainty inherent in
the catch sampling process across the whole stock assessment model, through to estimates of
biomass and stock parameters. The method is tested by simulations and found to provide reliable
and accurate estimates of stock parameters and associated uncertainty, while also outperforming
existing design-based and model-based estimation approaches.
The method is computationally very demanding and this is an obstacle to its adoption
by fisheries bodies. Once this obstacle is overcome, the ISA modelling framework developed
and presented in this thesis could provide an important contribution to the improvement in the
evaluation of uncertainty in fisheries stock assessments, not only of the AIS stock, but of any other
fish stock with similar data and dynamics structure. Furthermore, the models developed in this
study establish a solid conceptual platform to allow future development of more complex models
of fish population dynamics.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/4474
2013-01-01T00:00:00Z
Caneco, Bruno

Using energetic models to investigate the survival and reproduction of beaked whales (family Ziphiidae)
http://hdl.handle.net/10023/4053
Mass stranding of several species of beaked whales (family Ziphiidae) associated with exposure to anthropogenic sounds has raised concern for the conservation of these species. However, little is known about the species’ life histories, prey or habitat requirements. Without this knowledge, it becomes difficult to assess the effects of anthropogenic sound, since there is no way to determine whether the disturbance is impacting the species’ physical or environmental requirements. Here we take a bioenergetics approach to address this gap in our knowledge, as the elusive, deep-diving nature of beaked whales has made it hard to study these effects directly. We develop a model for Ziphiidae linking feeding energetics to the species’ requirements for survival and reproduction, since these life history traits would be the most likely to be impacted by non-lethal disturbances. Our models suggest that beaked whale reproduction requires energy dense prey, and that poor resource availability would lead to an extension of the inter-calving interval. Further, given current information, it seems that some beaked whale species require relatively high quality habitat in order to meet their requirements for survival and reproduction. As a result, even a small non-lethal disturbance that results in displacement of whales from preferred habitats could potentially impact a population if a significant proportion of that population was affected. We explored the impact of varying ecological parameters and model assumptions on survival and reproduction, and find that calf and fetus survival appear more readily affected than the survival of adult females.
Wed, 17 Jul 2013 00:00:00 GMT
http://hdl.handle.net/10023/4053
2013-07-17T00:00:00Z
New, Leslie Frances; Moretti, David; Hooker, Sascha Kate; Costa, Daniel P.; Simmons, Samantha E.

Estimating wildlife distribution and abundance from line transect surveys conducted from platforms of opportunity
http://hdl.handle.net/10023/3727
Line transect data obtained from 'platforms of opportunity' are useful for the monitoring
of long term trends in dolphin populations which occur over vast areas, yet analyses of
such data are problematic due to violation of fundamental assumptions of line transect
methodology. In this thesis we develop methods which allow estimates of dolphin relative
abundance to be obtained when certain assumptions of line transect sampling are violated.
Generalised additive models are used to model encounter rate and mean school size as
a function of spatially and temporally referenced covariates. The estimated relationship
between the response and the environmental and locational covariates is then used to
obtain a predicted surface for the response over the entire survey region. Given those
predicted surfaces, a density surface can then be obtained and an estimate of abundance
computed by numerically integrating over the entire survey region. This approach is
particularly useful when search effort is not random, in which case standard line transect
methods would yield biased estimates.
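The final step described above, turning a predicted density surface into an abundance estimate by numerical integration over the survey region, amounts to summing density times cell area over a prediction grid. A minimal sketch with invented values, not the dolphin survey output:

```python
# Abundance from a predicted density surface: numerically integrate by
# summing (predicted density) x (cell area) over the grid cells covering
# the survey region. Densities and grid size here are invented.
cell_area_km2 = 25.0                      # e.g. a 5 km x 5 km prediction grid
predicted_density = [0.0, 0.1, 0.4, 0.7,  # animals per km^2 in each cell
                     1.2, 0.9, 0.3, 0.05]

abundance = sum(d * cell_area_km2 for d in predicted_density)
print(abundance)
```

In practice the grid is fine enough that this Riemann-style sum closely approximates the integral of the fitted surface.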
Estimates of f(0) (the inverse of the effective strip (half-)width), an essential component
of the line transect estimator, may also be biased due to heterogeneity in detection probabilities.
We developed a conditional likelihood approach in which covariate effects are
directly incorporated into the estimation procedure. Simulation results indicated that the
method performs well in the presence of size-bias. When multiple covariates are used, it
is important that covariate selection be carried out.
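For a concrete picture of the role of f(0): with a half-normal detection function g(x) = exp(-x^2 / 2 sigma^2) and untruncated perpendicular distances, the maximum-likelihood estimate of sigma^2 is the mean squared distance, the effective strip half-width is mu = sigma * sqrt(pi/2), f(0) = 1/mu, and density is D = n f(0) / (2L). This is the textbook unconditional estimator, sketched with invented data; it is not the conditional-likelihood method with covariates developed in the thesis.

```python
import math

# Half-normal line-transect density estimate (textbook sketch; distances
# and effort are invented, not the eastern tropical Pacific data).
distances = [0.05, 0.10, 0.12, 0.20, 0.08, 0.15, 0.03, 0.18]  # km
L = 40.0                                                      # km of effort

n = len(distances)
sigma2 = sum(x * x for x in distances) / n   # MLE of sigma^2 (no truncation)
mu = math.sqrt(sigma2 * math.pi / 2.0)       # effective strip half-width
f0 = 1.0 / mu                                # f(0), the inverse of mu
density = n * f0 / (2.0 * L)                 # animals per km^2
print(f0, density)
```

Heterogeneity in detection probabilities biases sigma2, and hence f(0), which is exactly why the thesis incorporates covariates into the estimation of the detection function.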
As an example we applied the methods described above to eastern tropical Pacific dolphin
stocks. However, uncertainty in stock identification has never been directly incorporated
into methods used to obtain estimates of relative or absolute abundance. Therefore we
illustrate an approach in which trends in dolphin relative abundance are monitored by
small areas, rather than stocks.
Mon, 01 Jan 2001 00:00:00 GMT
http://hdl.handle.net/10023/3727
2001-01-01T00:00:00Z
Marques, Fernanda F. C.

Bayesian point process modelling of ecological communities
http://hdl.handle.net/10023/3710
The modelling of biological communities is important to further the understanding
of species coexistence and the mechanisms involved in maintaining
biodiversity. This involves considering not only interactions between individual
biological organisms, but also the incorporation of covariate information,
if available, in the modelling process. This thesis explores the use
of point processes to model interactions in bivariate point patterns within
a Bayesian framework, and, where applicable, in conjunction with covariate
data. Specifically, we distinguish between symmetric and asymmetric species
interactions and model these using appropriate point processes. In this thesis
we consider both pairwise and area interaction point processes, allowing for
purely inhibitory interactions and for both inhibitory and attractive interactions.
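A pairwise interaction process makes the inhibition concrete. For the Strauss process, the unnormalised density of a pattern x is beta^n(x) * gamma^s(x), where s(x) counts point pairs closer than an interaction radius r and gamma < 1 penalises close pairs. The sketch below evaluates that unnormalised log-density with invented parameters; it is one standard example of the model class, not a fitted model from the thesis.

```python
import math

# Unnormalised log-density of a Strauss pairwise-interaction process:
# log f(x) = n*log(beta) + s*log(gamma), with s the number of point
# pairs closer than the interaction radius r. gamma < 1 gives
# inhibition; gamma = 1 reduces to a Poisson process. Parameters and
# points are illustrative only.
def strauss_log_density(points, beta, gamma, r):
    n = len(points)
    s = sum(1 for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r)
    return n * math.log(beta) + s * math.log(gamma)

pts = [(0.1, 0.1), (0.15, 0.12), (0.8, 0.9)]   # exactly one close pair
ld = strauss_log_density(pts, beta=100.0, gamma=0.5, r=0.1)
print(ld)
```

Within a Bayesian analysis, such unnormalised densities enter MCMC acceptance ratios; the intractable normalising constant is the main computational difficulty these models pose.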
It is envisaged that the analyses and innovations presented in this thesis
will contribute to the parsimonious modelling of biological communities.
Fri, 28 Jun 2013 00:00:00 GMT
http://hdl.handle.net/10023/3710
2013-06-28T00:00:00Z
Nightingale, Glenna Faith

Animal population estimation using mark-recapture and plant-capture
http://hdl.handle.net/10023/3655
Mark-recapture is a method of population estimation that involves capturing a number
of animals from a population of unknown size on several occasions, and marking
those animals that are caught each time. By observing the number of marked
animals that are subsequently seen, estimates of the total population size can be
made. There are various subclasses of the mark-recapture method called the Otis-class
of models (Otis, Burnham, White & Anderson 1978). These relate to the
assumed behaviour of the individuals in the target population.
More recent work has generalised the theory of mark-recapture to the so-called
plant-capture, where a known number of animals are pre-inserted into the target
population. Sampling is then carried out as normal, but with additional information
coming from knowledge of the number of planted individuals.
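The extra information plants carry can be seen in a Lincoln-Petersen-style sketch: if R planted individuals are known to be present and a sample recovers r of them, then r/R estimates the per-individual capture probability, which scales up the u wild (unplanted) animals seen into a population estimate. This illustrates the general idea only; the thesis generalises Pathak's (1964) estimator rather than using this simple form, and all numbers below are invented.

```python
# Plant-capture sketch: R plants are seeded into a wild population of
# unknown size N. A sample recovers r of the plants and u wild animals;
# r/R estimates the capture probability, so u / (r/R) = u*R/r estimates
# N. (Illustrative only, not the generalised Pathak estimator.)
R = 50          # number of planted individuals (known)
r = 10          # plants recovered in the sample
u = 84          # wild (unplanted) animals in the sample

p_hat = r / R                 # estimated capture probability
N_hat = u * R / r             # estimated wild population size
print(p_hat, N_hat)
```

The benefit the thesis reports, improved mean point estimation when few animals are caught, follows intuitively: the plants supply a direct estimate of capture probability even when the wild catch is small.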
The theory underpinning plant-capture is less well developed than that of mark-recapture,
and the gain in population estimation from including plants has not often been
tested. This thesis shows that, under fixed and random sample-size models, the
inclusion of plants can improve the mean point population estimation of various
estimators. The estimator of Pathak (1964) is generalised to allow for the inclusion
of plants into the target population. The results show that mean estimates from
most estimators, under most models, can be improved with the inclusion of plants,
and the sample standard deviations of the simulations can be reduced. This improvement
in mean point population estimation is particularly pronounced when
the number of animals captured is low.
Sample coverage, which is the proportion of distinct animals caught during sampling,
is also often sought by practitioners. Given here is a generalisation of the
inverse population estimator of Pathak (1964) to plant-capture and a proposed new
inverse population estimator, which can be used as estimates of the coverage of a
sample.
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/10023/3655
2012-01-01T00:00:00Z
Gormley, Richard

Estimating anglerfish abundance from trawl surveys, and related problems
http://hdl.handle.net/10023/3652
The content of this thesis was motivated by the need to estimate anglerfish abundance
from stratified random trawl surveys of the anglerfish stock which occupies
the northern European shelf (Fernandes et al., 2007). The survey was conducted
annually from 2005 to 2010 in order to obtain age-structured estimates of absolute
abundance for this stock. An estimation method is developed that incorporates statistical models for herding, length-based net retention probability and missing age data, and that carries uncertainty from all of these sources through to variance estimation.
A key component of abundance estimation is the estimation of capture probability.
Capture probability is estimated from the experimental survey data using various
logistic regression models with haul as a random effect. Conditional on the estimated
capture probability, a number of abundance estimators are developed and applied to
the anglerfish data. The abundance estimators differ in the way that the haul effect is incorporated. The performance of these estimators is investigated by simulation. An estimator with form similar to that conventionally used to estimate abundance from distance sampling surveys is found to perform best.
The estimators developed for the anglerfish survey data which incorporate random
effects in capture probability have wider application than trawl surveys. We examine
the analytic properties of these estimators when the capture/detection probability is
known. We apply these estimators to three different types of survey data in addition
to the anglerfish data, with different forms of random effects and investigate their
performance by simulation. We find that a generalization of the form of estimator
typically used on line transect surveys performs best overall. It has the lowest bias
and mean squared error among all the estimators we considered.
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/10023/3652
2012-01-01T00:00:00Z
Yuan, Yuan

Mixed effect models in distance sampling
http://hdl.handle.net/10023/3618
Recently, much effort has been expended on improving conventional distance sampling methods, e.g. by replacing the design-based approach with a model-based approach where observed counts are related to environmental covariates (Hedley and Buckland, 2004) or by incorporating covariates in the detection function model (Marques and Buckland, 2003).
While these models have generally been limited to include fixed effects, we propose
four different methods for analysing distance sampling data using mixed effects models. These include an extension of the two-stage approach (Buckland et al., 2009),
where we include site random effects in the second-stage count model to account for
correlated counts at the same sites. We also present two integrated approaches which
include site random effects in the count model. These approaches combine the analysis stages for the detection and count models and allow simultaneous estimation of all
parameters. Furthermore, we develop a detection function model that incorporates
random effects. We also propose a novel Bayesian approach to analysing distance sampling data which uses a Metropolis-Hastings algorithm for updating model parameters and a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm for assessing model uncertainty. Lastly, we propose using hierarchical centering as a novel technique for improving model mixing and hence facilitating an RJMCMC algorithm for mixed models.
We analyse two case studies, both large-scale point transect surveys, where the interest lies in establishing the effects of conservation buffers on agricultural fields. For each case study, we compare the results from one integrated approach to those from
the extended two-stage approach. We find that these may differ in parameter estimates for covariates that were both in the detection and the count model and in model probabilities when model uncertainty was included in inference. The performance of the random effects based detection function is assessed via simulation and when heterogeneity in the data is present, one of the new estimators yields improved results compared to conventional distance sampling estimators.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10023/3618
2013-01-01T00:00:00Z
Oedekoven, Cornelia Sabrina

Decomposition tables for experiments. II. Two–one randomizations
http://hdl.handle.net/10023/3479
We investigate structure for pairs of randomizations that do not follow each other in a chain. These are unrandomized-inclusive, independent, coincident or double randomizations. This involves taking several structures that satisfy particular relations and combining them to form the appropriate orthogonal decomposition of the data space for the experiment. We show how to establish the decomposition table giving the sources of variation, their relationships and their degrees of freedom, so that competing designs can be evaluated. This leads to recommendations for when the different types of multiple randomization should be used.
Fri, 01 Oct 2010 00:00:00 GMThttp://hdl.handle.net/10023/34792010-10-01T00:00:00ZBrien, C. J.Bailey, Rosemary AnneDecomposition tables for experiments I. A chain of randomizations
http://hdl.handle.net/10023/3478
One aspect of evaluating the design for an experiment is the discovery of the relationships between subspaces of the data space. Initially we establish the notation and methods for evaluating an experiment with a single randomization. Starting with two structures, or orthogonal decompositions of the data space, we describe how to combine them to form the overall decomposition for a single-randomization experiment that is "structure balanced." The relationships between the two structures are characterized using efficiency factors. The decomposition is encapsulated in a decomposition table. Then, for experiments that involve multiple randomizations forming a chain, we take several structures that pairwise are structure balanced and combine them to establish the form of the orthogonal decomposition for the experiment. In particular, it is proven that the properties of the design for such an experiment are derived in a straightforward manner from those of the individual designs. We show how to formulate an extended decomposition table giving the sources of variation, their relationships and their degrees of freedom, so that competing designs can be evaluated.
Tue, 01 Dec 2009 00:00:00 GMThttp://hdl.handle.net/10023/34782009-12-01T00:00:00ZBrien, C. J.Bailey, Rosemary AnneQuantifying biodiversity trends in time and space
http://hdl.handle.net/10023/3414
The global loss of biodiversity calls for robust large-scale diversity assessment. Biological diversity is a multi-faceted concept; defined as the “variety of life”, answering questions such as “How much is there?” or more precisely “Have we succeeded in reducing the rate of its decline?” is not straightforward. While various aspects of biodiversity give rise to numerous ways of quantification, we focus on temporal (and spatial) trends and their changes in species diversity.
Traditional diversity indices summarise information contained in the species abundance distribution, i.e. each species' proportional contribution to total abundance. Estimated from data, these indices can be biased if variation in detection probability is ignored. We discuss differences between diversity indices and demonstrate possible adjustments for detectability.
Additionally, most indices focus on the most abundant species in ecological communities. We introduce a new set of diversity measures, based on a family of goodness-of-fit statistics. A function of a free parameter, this family allows us to vary the sensitivity of these measures to dominance and rarity of species.
Their performance is studied by assessing temporal trends in diversity for five communities of British breeding birds based on 14 years of survey data, where they are applied alongside the current headline index, a geometric mean of relative abundances. Revealing the contributions of both rare and common species to biodiversity trends, these "goodness-of-fit" measures provide novel insights into how ecological communities change over time.
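The headline index mentioned above can be sketched in a few lines (a hypothetical simplification: the published index also smooths trends and handles zero counts):

```python
import math

def geometric_mean_index(counts_by_year, base_year):
    """Headline-style index: geometric mean, across species, of each
    species' abundance relative to a chosen base year."""
    base = counts_by_year[base_year]
    index = {}
    for year, counts in counts_by_year.items():
        ratios = [c / b for c, b in zip(counts, base)]
        log_mean = sum(math.log(r) for r in ratios) / len(ratios)
        index[year] = math.exp(log_mean)
    return index

# Hypothetical counts for three species in two years: one species
# doubles, one halves, one is stable.
counts = {2000: [10, 40, 5], 2001: [20, 20, 5]}
idx = geometric_mean_index(counts, base_year=2000)
# The doubling and the halving cancel: both index values equal 1.0.
```

This cancellation is exactly why the geometric mean can mask opposing changes in rare and common species, which the goodness-of-fit measures above are designed to reveal.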
Biodiversity is not only subject to temporal changes, but it also varies across space. We take first steps towards estimating spatial diversity trends. Finally, processes maintaining biodiversity act locally, at specific spatial scales. Contrary to abundance-based summary statistics, spatial characteristics of ecological communities may distinguish these processes. We suggest a generalisation to a spatial summary, the cross-pair overlap distribution, to render it more flexible to spatial scale.
Fri, 30 Nov 2012 00:00:00 GMThttp://hdl.handle.net/10023/34142012-11-30T00:00:00ZStudeny, Angelika C.Finite and infinite ergodic theory for linear and conformal dynamical systems
http://hdl.handle.net/10023/3220
The first main topic of this thesis is the thorough analysis of two families of piecewise linear
maps on the unit interval, the α-Lüroth and α-Farey maps. Here, α denotes a countably infinite
partition of the unit interval whose atoms only accumulate at the origin. The basic properties
of these maps will be developed, including that each α-Lüroth map (denoted Lα) gives rise to a
series expansion of real numbers in [0,1], a certain type of Generalised Lüroth Series. The first
example of such an expansion was given by Lüroth. The map Lα is the jump transformation
of the corresponding α-Farey map Fα. The maps Lα and Fα share the same relationship as the
classical Farey and Gauss maps which give rise to the continued fraction expansion of a real
number. We also consider the topological properties of Fα and some Diophantine-type sets of
numbers expressed in terms of the α-Lüroth expansion.
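For orientation, one common formulation of the α-Lüroth map is the following (the notation here is illustrative and may differ in detail from the thesis): write the partition as α = {A_n : n ∈ ℕ} with A_n = (t_{n+1}, t_n], t_1 = 1, t_n decreasing to 0, and a_n := t_n − t_{n+1}. Then

```latex
L_\alpha(x) :=
\begin{cases}
  \dfrac{t_n - x}{a_n} & \text{if } x \in A_n = (t_{n+1}, t_n], \\[4pt]
  0 & \text{if } x = 0,
\end{cases}
```

so each branch maps its atom affinely onto [0, 1), and iterating while recording the sequence of atoms visited yields the α-Lüroth digit expansion of x.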
Next we investigate certain ergodic-theoretic properties of the maps Lα and Fα. It will turn
out that the Lebesgue measure λ is invariant for every map Lα and that there exists a unique
Lebesgue-absolutely continuous invariant measure for Fα. We will give a precise expression for
the density of this measure. Our main result is that both Lα and Fα are exact, and thus ergodic.
The interest in the invariant measure for Fα lies in the fact that under a particular condition on
the underlying partition α, the invariant measure associated to the map Fα is infinite.
Then we proceed to introduce and examine the sequence of α-sum-level sets arising from
the α-Lüroth map, for an arbitrary given partition α. These sets can be written dynamically in
terms of Fα. The main result concerning the α-sum-level sets is to establish weak and strong
renewal laws. Note that for the Farey map and the Gauss map, the analogue of this result has
been obtained by Kesseböhmer and Stratmann. There the results were derived by using advanced
infinite ergodic theory, rather than the strong renewal theorems employed here. This underlines
the fact that one of the main ingredients of infinite ergodic theory is provided by some delicate
estimates in renewal theory.
Our final main result concerning the α-Lüroth and α-Farey systems is to provide a fractal-geometric
description of the Lyapunov spectra associated with each of the maps Lα and Fα.
The Lyapunov spectra for the Farey map and the Gauss map have been investigated in detail by
Kesseböhmer and Stratmann. The Farey map and the Gauss map are non-linear, whereas the
systems we consider are always piecewise linear. However, since our analysis is based on a large
family of different partitions of U , the class of maps which we consider in this paper allows us
to detect a variety of interesting new phenomena, including that of phase transitions.
Finally, we come to the conformal systems of the title. These are the limit sets of discrete
subgroups of the group of isometries of the hyperbolic plane. For these so-called Fuchsian
groups, our first main result is to establish the Hausdorff dimension of some Diophantine-type
sets contained in the limit set that are similar to those considered for the maps Lα. These sets
are then used in our second main result to analyse the more geometrically defined strict-Jarník
limit set of a Fuchsian group. Finally, we obtain a “weak multifractal spectrum” for the Patterson
measure associated to the Fuchsian group.
Wed, 30 Nov 2011 00:00:00 GMThttp://hdl.handle.net/10023/32202011-11-30T00:00:00ZMunday, SaraSpatial patterns and species coexistence : using spatial statistics to identify underlying ecological processes in plant communities
http://hdl.handle.net/10023/3084
The use of spatial statistics to investigate ecological processes in plant communities is becoming increasingly widespread. In diverse communities such as tropical rainforests, analysis of spatial structure may help to unravel the various processes that act and interact to maintain high levels of diversity. In particular, a number of contrasting mechanisms have been suggested to explain species coexistence, and these differ greatly in their practical implications for the ecology and conservation of tropical forests. Traditional first-order measures of community structure have proved unable to distinguish these mechanisms in practice, but statistics that describe spatial structure may be able to do so. This is of great interest and relevance as spatially explicit data become available for a range of ecological communities and analysis methods for these data become more accessible.
This thesis investigates the potential for inference about underlying ecological processes in plant communities using spatial statistics. Current methodologies for spatial analysis are reviewed and extended, and are used to characterise the spatial signals of the principal theorised mechanisms of coexistence. The sensitivity of a range of spatial statistics to these signals is assessed, and the strength of such signals in natural communities is investigated.
The spatial signals of the processes considered here are found to be strong and robust to modelled stochastic variation. Several new and existing spatial statistics are found to be sensitive to these signals, and offer great promise for inference about underlying processes from empirical data. The relative strengths of particular processes are found to vary between natural communities, with any one theory being insufficient to explain observed patterns. This thesis extends both understanding of species coexistence in diverse plant communities and the methodology for assessing underlying process in particular cases. It demonstrates that the potential of spatial statistics in ecology is great and largely unexplored.
Thu, 01 Nov 2012 00:00:00 GMThttp://hdl.handle.net/10023/30842012-11-01T00:00:00ZBrown, CalumVessel noise affects beaked whale behavior : Results of a dedicated acoustic response study
http://hdl.handle.net/10023/3078
Some beaked whale species are susceptible to the detrimental effects of anthropogenic noise. Most studies have concentrated on the effects of military sonar, but other forms of acoustic disturbance (e.g. shipping noise) may disrupt behavior. An experiment involving the exposure of target whale groups to intense vessel-generated noise tested how these exposures influenced the foraging behavior of Blainville’s beaked whales (Mesoplodon densirostris) in the Tongue of the Ocean (Bahamas). A military array of bottom-mounted hydrophones was used to measure the response based upon changes in the spatial and temporal pattern of vocalizations. The archived acoustic data were used to compute metrics of the echolocation-based foraging behavior for 16 targeted groups, 10 groups further away on the range, and 26 non-exposed groups. The duration of foraging bouts was not significantly affected by the exposure. Changes in the hydrophone over which the group was most frequently detected occurred as the animals moved around within a foraging bout, and their number was significantly lower the closer the whales were to the sound source. Non-exposed groups also had significantly more changes in the primary hydrophone than exposed groups irrespective of distance. Our results suggested that broadband ship noise caused a significant change in beaked whale behavior up to at least 5.2 kilometers away from the vessel. The observed change could potentially correspond to a restriction in the movement of groups, a period of more directional travel, a reduction in the number of individuals clicking within the group, or a response to changes in prey movement.
Fri, 03 Aug 2012 00:00:00 GMThttp://hdl.handle.net/10023/30782012-08-03T00:00:00ZPirotta, EnricoMilor, RachelQuick, Nicola JaneMoretti, DavidDimarzio, NancyTyack, Peter LloydBoyd, IanHastie, Gordon DrummondEstimating abundance of rare, small mammals: A case study of the Key Largo woodrat (Neotoma floridana smalli)
http://hdl.handle.net/10023/2068
Estimates of animal abundance or density are fundamental quantities in ecology and conservation, but for many species, such as rare, small mammals, obtaining robust estimates is problematic. In this thesis, I combine elements of two standard abundance estimation methods, capture-recapture and distance sampling, to develop a method called trapping point transects (TPT). In TPT, a "detection function", g(r) (i.e. the probability of capturing an animal, given that it is r metres from a trap when the trap is set), is estimated using a subset of animals whose locations are known before traps are set. Generalised linear models are used to estimate the detection function, and the model can be extended to include random effects to allow for heterogeneity in capture probabilities. Standard point transect methods are modified to estimate abundance. Two abundance estimators are available. The first estimator is based on the reciprocal of the expected probability of detecting an animal, ^P, where the expectation is over r;
whereas the second estimator is the expectation of the reciprocal of ^P.
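The contrast between the two estimators can be sketched numerically (the half-normal detection function and all numbers below are hypothetical, not taken from the thesis):

```python
import math

def detection_prob(r, g0=0.88, scale=30.0):
    """Hypothetical half-normal detection function g(r): probability that
    a trap captures an animal located r metres away."""
    return g0 * math.exp(-r * r / (2 * scale * scale))

# Animals at known distances from a trap (hypothetical values).
distances = [5.0, 15.0, 30.0, 60.0]
p = [detection_prob(r) for r in distances]

# Estimator 1: reciprocal of the expected detection probability.
est1 = 1.0 / (sum(p) / len(p))
# Estimator 2: expectation of the reciprocal.  By Jensen's inequality
# est2 >= est1, and est2 can be badly inflated when any p is tiny.
est2 = sum(1.0 / pi for pi in p) / len(p)
```

The divergence of the two correction factors when some detection probabilities are very small is one way to see how the second estimator can become extremely biased.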
Performance of the TPT method under various sampling efforts and underlying true detection probabilities of individuals in the population was investigated in a simulation study. When the underlying probability of detection was high (g(0) = 0.88) and between-individual variation was small, survey effort could be surprisingly low (c. 510 trap nights) to yield low bias (c. 4%) in the two estimators;
but under certain situations, the second estimator can be extremely biased. Uncertainty and relative bias in population estimates increased with decreasing detectability and increasing between-individual variation.
Abundance of the Key Largo woodrat (Neotoma floridana smalli), an endangered rodent with a restricted geographic range, was estimated using TPT. The TPT method compared well to other viable methods (capture-recapture and spatially-explicit capture-recapture), in terms of both field practicality and cost. The TPT method may generally be useful in estimating animal abundance in trapping studies and variants of the TPT method are presented.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10023/20682011-01-01T00:00:00ZPotts, Joanne M.Bayesian modelling of integrated data and its application to seabird populations
http://hdl.handle.net/10023/1635
Integrated data analyses are becoming increasingly popular in studies of wild animal populations where two or more separate sources of data contain information about common parameters. Here we develop an integrated population model using abundance and demographic data from a study of common guillemots (Uria aalge) on the Isle of May, southeast Scotland. A state-space model for the count data is supplemented by three demographic time series (productivity and two mark-recapture-recovery (MRR)), enabling the estimation of prebreeder emigration rate - a parameter for which there is no direct observational data, and which is unidentifiable in the separate analysis of MRR data. A Bayesian approach using MCMC provides a flexible and powerful analysis framework.
This model is extended to provide predictions of future population trajectories. Adopting random effects models for the survival and productivity parameters, we implement the MCMC algorithm to obtain a posterior sample of the underlying process means and variances (and population sizes) within the study period. Given this sample, we predict future demographic parameters, which in turn allows us to predict future population sizes and obtain the corresponding posterior distribution. Under the assumption that recent, unfavourable conditions persist in the future, we obtain a posterior probability of 70% that there is a population decline of >25% over a 10-year period.
Lastly, using MRR data we test for spatial, temporal and age-related correlations in guillemot survival among three widely separated Scottish colonies that have varying overlap in nonbreeding distribution. We show that survival is highly correlated over time for colonies/age classes sharing wintering areas, and essentially uncorrelated for those with separate wintering areas. These results strongly suggest that one or more aspects of winter environment are responsible for spatiotemporal variation in survival of British guillemots, and provide insight into the factors driving multi-population dynamics of the species.
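The posterior-predictive projection step can be sketched as follows (a minimal sketch with hypothetical posterior draws and an assumed process-noise standard deviation, not the guillemot model itself):

```python
import random

def prob_of_decline(n0, growth_draws, years, threshold=0.75, seed=0):
    """For each posterior draw of the mean annual growth multiplier,
    simulate the population forward with Gaussian process noise and
    return the fraction of trajectories ending below threshold * n0."""
    rng = random.Random(seed)
    declines = 0
    for g in growth_draws:
        n = n0
        for _ in range(years):
            n *= max(rng.gauss(g, 0.05), 0.0)  # assumed process noise
        if n < threshold * n0:
            declines += 1
    return declines / len(growth_draws)

draws = [0.96, 0.98, 1.00, 0.97, 0.95]  # hypothetical posterior draws
prob_decline = prob_of_decline(1000.0, draws, years=10)
```

In practice each trajectory uses one full posterior draw of the process means and variances, so parameter uncertainty and process noise both propagate into the projected decline probability.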
Tue, 30 Nov 2010 00:00:00 GMThttp://hdl.handle.net/10023/16352010-11-30T00:00:00ZReynolds, Toby J.Statistical models for the long-term monitoring of songbird populations: a Bayesian analysis of constant effort sites and ring-recovery data
http://hdl.handle.net/10023/885
To underpin and improve advice given to government and other interested parties
on the state of Britain’s common songbird populations, new models for
analysing ecological data are developed in this thesis. These models use data
from the British Trust for Ornithology’s Constant Effort Sites (CES) scheme,
an annual bird-ringing programme in which catch effort is standardised. Data
from the CES scheme are routinely used to index abundance and productivity,
and to a lesser extent estimate adult survival rates. However, two features of
the CES data that complicate analysis were previously inadequately addressed,
namely the presence in the catch of “transient” birds not associated with the
local population, and the sporadic failure in the constancy of effort assumption
arising from the absence of within-year catch data. The current methodology
is extended, with efficient Bayesian models developed for each of these demographic
parameters that account for both of these data nuances, and from which
reliable and usefully precise estimates are obtained.
Of increasing interest is the relationship between abundance and the underlying
vital rates, an understanding of which facilitates effective conservation.
CES data are particularly amenable to an integrated approach to population
modelling, providing a combination of demographic information from a single
source. Such an integrated approach is developed here, employing Bayesian
methodology and a simple population model to unite abundance, productivity
and survival within a consistent framework. Independent data from ring-recoveries
provide additional information on adult and juvenile survival rates.
Specific advantages of this new integrated approach are identified, among which
is the ability to determine juvenile survival accurately, to disentangle the probabilities
of survival and permanent emigration, and to obtain estimates of total
seasonal productivity.
The methodologies developed in this thesis are applied to CES data from Sedge
Warbler, Acrocephalus schoenobaenus, and Reed Warbler, A. scirpaceus.
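A minimal sketch of the kind of population model that links abundance to productivity and survival in an integrated analysis (the rates below are hypothetical, and the model is deterministic for clarity):

```python
def project(n0, phi_ad, phi_juv, productivity, years):
    """Deterministic female-based projection tying abundance to the vital
    rates: N[t+1] = N[t] * phi_ad + N[t] * productivity * phi_juv."""
    ns = [float(n0)]
    for _ in range(years):
        n = ns[-1]
        ns.append(n * phi_ad + n * productivity * phi_juv)
    return ns

# Hypothetical rates: adult survival 0.5, juvenile survival 0.3,
# productivity 2.0 young per female, giving 10% annual growth
traj = project(100, 0.5, 0.3, 2.0, 5)
```

The annual growth multiplier is phi_ad + productivity * phi_juv, so these three quantities jointly constrain observed abundance changes - the core idea behind uniting the separate data sources in one framework.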
Fri, 25 Jun 2010 00:00:00 GMT | http://hdl.handle.net/10023/885 | Cave, Vanessa M.

Topics in estimation of quantum channels
http://hdl.handle.net/10023/869
A quantum channel is a mapping which sends density matrices to density
matrices. The estimation of quantum channels is of great importance to the
field of quantum information. In this thesis two topics related to estimation
of quantum channels are investigated. The first of these is the upper
bound of Sarovar and Milburn (2006) on the Fisher information obtainable
by measuring the output of a channel. Two questions raised by Sarovar and
Milburn about their bound are answered. A Riemannian metric on the space
of quantum states is introduced, related to the construction of the Sarovar
and Milburn bound. Its properties are characterized.
The second topic investigated is the estimation of unitary channels. The
situation is considered in which an experimenter has several non-identical
unitary channels that have the same parameter. It is shown that it is possible
to improve estimation using the channels together, analogous to the case of
identical unitary channels. Also, a new method of phase estimation is given
based on a method sketched by Kitaev (1996). Unlike other phase estimation
procedures which perform similarly, this procedure requires only very basic
experimental resources.
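As a toy illustration of the central quantity involved - the Fisher information obtainable by measuring a channel's output - consider a one-parameter rotation with a two-outcome measurement whose success probability is p(theta) = cos^2(theta/2). This example is illustrative only; it is not the Sarovar-Milburn bound itself.

```python
import math

def fisher_info(theta, eps=1e-6):
    """Classical Fisher information for one two-outcome measurement on
    the output of a one-parameter rotation channel, with outcome
    probability p(theta) = cos^2(theta / 2)."""
    p = lambda t: math.cos(t / 2.0) ** 2
    dp = (p(theta + eps) - p(theta - eps)) / (2.0 * eps)   # central difference
    pt = p(theta)
    return dp * dp / (pt * (1.0 - pt))

info = fisher_info(0.7)
```

For this particular measurement the information works out to exactly 1 at any theta where both outcomes have positive probability, since (p')^2 = sin^2(theta)/4 = p(1 - p).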
Wed, 23 Jun 2010 00:00:00 GMT | http://hdl.handle.net/10023/869 | O'Loan, Caleb J.

Multi-species state-space modelling of the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) in Scotland
http://hdl.handle.net/10023/864
State-space modelling is a powerful tool to study ecological systems. The direct inclusion of uncertainty, unification of models and data, and ability to model unobserved, hidden states increases our knowledge about the environment and provides
new ecological insights. I extend the state-space framework to create multi-species
models, showing that the ability to model ecosystem interactions is limited only by data availability. State-space models are fit using both Bayesian and Frequentist methods, making them independent of a statistical school of thought. Bayesian approaches can have the advantage in their ability to account for missing data and fit hierarchical structures
and models with many parameters to limited data, as is often the case in ecological studies.
I have taken a Bayesian model fitting approach in this thesis.
The predator-prey interactions between the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) are used to demonstrate state-space modelling’s
capabilities. The harrier data are believed to be known without error, while missing
data make the cyclic dynamics of the grouse harder to model. The grouse-harrier interactions are modelled in a multi-species state-space model, rather than including
one species as a covariate in the other’s model. Finally, models are included for the
harriers’ alternate prey.
The single- and multi-species state-space models for the predator-prey interactions
provide insight into the species’ management. The models investigate aspects of the species’ behaviour, from the mechanisms behind grouse cycles to what motivates harrier immigration. The inferences drawn from these models are applicable to management, suggesting actions to halt grouse cycles or mitigate the grouse-harrier conflict. Overall, the multi-species models suggest that two popular ideas for grouse-harrier management, diversionary feeding and habitat manipulation to reduce alternate prey densities, will not have the desired effect, and in the case of reducing prey densities, may even increase the harriers’ impact on grouse chicks.
Wed, 23 Jun 2010 00:00:00 GMT | http://hdl.handle.net/10023/864 | New, Leslie Frances

Distance software: design and analysis of distance sampling surveys for estimating population size
http://hdl.handle.net/10023/817
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial pre-requisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: CDS (conventional distance sampling), which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; MCDS (multiple covariate distance sampling), which allows covariates in addition to distance; and MRDS (mark-recapture distance sampling), which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the DSM (density surface modelling) analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods accessible to practicing ecologists.
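The simplest case handled by the CDS engine can be written out directly: fit a half-normal detection function to perpendicular distances by maximum likelihood and convert the effective strip half-width into a density estimate. This is a bare-bones sketch, not the Distance implementation (no truncation, binning, or variance estimation), and the distances below are made up.

```python
import math

def cds_density(distances, total_line_length):
    """Conventional distance sampling (CDS) with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)) and no truncation. The MLE
    of sigma^2 is the mean squared perpendicular distance; the effective
    strip half-width (ESW) is sigma * sqrt(pi / 2); density is then
    n / (2 * L * ESW)."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n      # MLE of sigma^2
    esw = math.sqrt(sigma2 * math.pi / 2.0)         # effective strip half-width
    return n / (2.0 * total_line_length * esw)

# Made-up perpendicular distances, in the same units as the line length
d = cds_density([1.2, 0.4, 2.1, 0.0, 0.9, 1.5, 0.3, 0.7], total_line_length=100.0)
```

The MCDS and MRDS engines generalise exactly this calculation, by letting sigma depend on covariates and by relaxing certain detection at zero distance respectively.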
Fri, 01 Jan 2010 00:00:00 GMT | http://hdl.handle.net/10023/817 | Thomas, Len; Buckland, Stephen Terrence; Rexstad, Eric; Laake, J. L.; Strindberg, S.; Hedley, S. L.; Bishop, J. R. B.; Marques, Tiago A.

Embedding population dynamics in mark-recapture models
http://hdl.handle.net/10023/718
Mark-recapture methods use repeated captures of individually identifiable animals to provide estimates of properties of populations. Different models allow estimates to be obtained for population size and rates of processes governing population dynamics. State-space models consist of two linked processes evolving simultaneously over time. The state process models the evolution of the true, but unknown, states of the population. The observation process relates observations on the population to these true states.
Mark-recapture models specified within a state-space framework allow population dynamics models to be embedded in inference ensuring that estimated changes in the population are consistent with assumptions regarding the biology of the modelled population. This overcomes a limitation of current mark-recapture methods.
Two alternative approaches are considered. The "conditional" approach conditions on known numbers of animals possessing capture history patterns including capture in the current time period. An animal's capture history determines its state; consequently, capture parameters appear in the state process rather than the observation process. There is no observation error in the model. Uncertainty occurs only through the numbers of animals not captured in the current time period.
An "unconditional" approach is considered in which the capture histories are regarded as observations. Consequently, capture histories do not influence an animal's state and capture probability parameters appear in the observation process. Capture histories are considered a random realization of the stochastic observation process. This is more consistent with traditional mark-recapture methods.
Development and implementation of particle filtering techniques for fitting these models under each approach are discussed. Simulation studies show reasonable performance for the unconditional approach and highlight problems with the conditional approach. Strengths and limitations of each approach are outlined, with reference to Soay sheep data analysis, and suggestions are presented for future analyses.
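A bootstrap particle filter of the kind used to fit these models - propagate particles through the state process, weight them by the observation likelihood, resample - can be sketched as follows. The toy model (binomial survival, fixed recruitment, binomial detection) and all parameter values are hypothetical, chosen only to show the mechanics.

```python
import math, random

def particle_filter(counts, n_particles=500, survival=0.8, births=20,
                    p_detect=0.6, n_init=50, seed=0):
    """Bootstrap particle filter for a toy population state-space model.
    State process:       N[t] = Binomial(N[t-1], survival) + births
    Observation process: y[t] ~ Binomial(N[t], p_detect)
    Returns the filtered mean population size at each time step."""
    rng = random.Random(seed)
    particles = [n_init] * n_particles
    filtered = []
    for y in counts:
        # propagate every particle through the state process
        particles = [sum(rng.random() < survival for _ in range(n)) + births
                     for n in particles]
        # weight by the binomial observation likelihood
        weights = [math.comb(n, y) * p_detect ** y * (1 - p_detect) ** (n - y)
                   if y <= n else 0.0
                   for n in particles]
        # multinomial resampling in proportion to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
        filtered.append(sum(particles) / n_particles)
    return filtered

means = particle_filter([35, 38, 36])
```

With detection probability 0.6, counts in the mid-30s pull the filtered population estimates towards roughly 60 animals.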
Wed, 24 Jun 2009 00:00:00 GMT | http://hdl.handle.net/10023/718 | Bishop, Jonathan R. B.

The importance of analysis method for breeding bird survey population trend estimates
http://hdl.handle.net/10023/685
Population trends from the Breeding Bird Survey are widely used to focus conservation efforts on species thought to be in decline and to test preliminary hypotheses regarding the causes of these declines. A number of statistical methods have been used to estimate population trends, but there is no consensus as to which is the most reliable. We quantified differences in trend estimates for different analysis methods applied to the same subset of Breeding Bird Survey data. We estimated trends for 115 species in British Columbia using three analysis methods: U.S. National Biological Service route regression, Canadian Wildlife Service route regression, and nonparametric rank-trends analysis. Overall, the number of species estimated to be declining was similar among the three methods, but the number of statistically significant declines was not similar (15, 8, and 29, respectively). In addition, many differences existed among methods in the trend estimates assigned to individual species. Comparing the two route regression methods, Canadian Wildlife Service estimates had a greater absolute magnitude on average than those of the U.S. National Biological Service method. U.S. National Biological Service estimates were on average more positive than the Canadian Wildlife Service estimates when the respective agency's data selection criteria were applied separately. These results imply that our ability to detect population declines and to prioritize species of conservation concern depends strongly upon the analysis method used. This highlights the need for further research to determine how best to accurately estimate trends from the data. We suggest a method for evaluating the performance of the analysis methods by using simulated Breeding Bird Survey data.
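In outline, route regression estimates a log-linear trend on each survey route and then combines the route-level slopes. The sketch below is an unweighted caricature with invented counts - the U.S. National Biological Service and Canadian Wildlife Service implementations differ precisely in their weighting and data selection, which is the source of the discrepancies reported above.

```python
import math

def route_regression_trend(routes):
    """Caricature of route regression: fit a log-linear slope to each
    route's annual counts, average the slopes across routes (unweighted
    here), and convert to an annual proportional change."""
    slopes = []
    for counts in routes:
        ys = [math.log(c + 0.5) for c in counts]    # offset guards against zero counts
        n = len(ys)
        xbar = (n - 1) / 2.0
        ybar = sum(ys) / n
        sxx = sum((i - xbar) ** 2 for i in range(n))
        slopes.append(sum((i - xbar) * (y - ybar) for i, y in enumerate(ys)) / sxx)
    mean_slope = sum(slopes) / len(slopes)
    return math.exp(mean_slope) - 1.0               # annual proportional change

# Two invented routes surveyed for five years, both mildly increasing
trend = route_regression_trend([[10, 12, 11, 14, 15], [5, 6, 5, 7, 8]])
```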
Mon, 01 Jan 1996 00:00:00 GMT | http://hdl.handle.net/10023/685 | Thomas, Len; Martin, Kathy

Retrospective power analysis
http://hdl.handle.net/10023/679
Many papers have appeared in the recent biological literature encouraging us to incorporate statistical power analysis into our hypothesis testing protocol (Peterman 1990; Fairweather 1991; Muller & Benignus 1992; Taylor & Gerrodette 1993; Searcy-Bernal 1994; Thomas & Juanes 1996). The importance of doing a power analysis before beginning a study (prospective power analysis) is universally accepted: such analyses help us to decide how many samples are required to have a good chance of getting unambiguous results. In contrast, the role of power analysis after the data are collected and analyzed (retrospective power analysis) is controversial, as is evidenced by the papers of Reed and Blaustein (1995) and Hayes and Steidl (1997). The controversy is over the use of information from the sample data in retrospective power calculations. As I will show, the type of information used has fundamental implications for the value of such analyses. I compare the approaches to calculating retrospective power, noting the strengths and weaknesses of each, and make general recommendations as to how and when retrospective power analyses should be conducted.
The pdf contains the article; the ASCII file contains SAS code to calculate power and confidence limits for simple linear regression, as detailed in the article appendix.
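The power calculation for simple linear regression mentioned above can also be approximated by simulation: generate data under an assumed slope, fit ordinary least squares, and count rejections. This is a stdlib Python sketch, not the SAS code distributed with the article; the critical t value must be supplied by the caller for the appropriate degrees of freedom (2.101 is the two-sided 5% point for 18 df).

```python
import math, random

def sim_power(slope, sd, xs, n_sims=2000, t_crit=2.101, seed=7):
    """Monte Carlo power for the two-sided slope test in simple linear
    regression: simulate y = slope * x + Normal(0, sd) noise, fit OLS,
    and count how often |t| exceeds the supplied critical value."""
    rng = random.Random(seed)
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    rejections = 0
    for _ in range(n_sims):
        ys = [slope * x + rng.gauss(0.0, sd) for x in xs]
        ybar = sum(ys) / n
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
        rss = sum((y - ybar - b * (x - xbar)) ** 2 for x, y in zip(xs, ys))
        se_b = math.sqrt(rss / (n - 2) / sxx)
        if abs(b / se_b) > t_crit:
            rejections += 1
    return rejections / n_sims

# 20 evenly spaced x values; a fairly steep slope gives high power here
power = sim_power(slope=0.5, sd=1.0, xs=[float(i) for i in range(20)])
```

Run prospectively (with hypothesised effect sizes) this is uncontroversial; the debate discussed in the article concerns plugging in quantities estimated from the observed sample.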
Wed, 01 Jan 1997 00:00:00 GMT | http://hdl.handle.net/10023/679 | Thomas, Len

A unified framework for modelling wildlife population dynamics
http://hdl.handle.net/10023/678
This paper proposes a unified framework for defining and fitting stochastic, discrete-time, discrete-stage population dynamics models. The biological system is described by a state–space model, where the true but unknown state of the population is modelled by a state process, and this is linked to survey data by an observation process. All sources of uncertainty in the inputs, including uncertainty about model specification, are readily incorporated. The paper shows how the state process can be represented as a generalization of the standard Leslie or Lefkovitch matrix. By dividing the state process into subprocesses, complex models can be constructed from manageable building blocks. The paper illustrates the approach with a model of the British Grey Seal metapopulation, using sequential importance sampling with kernel smoothing to fit the model.
The pdf document contains the full article text; program code (in S-PLUS 6.1) for the example analysis is in the three text files; data is available from the Sea Mammal Research Unit (http://www.smru.st-and.ac.uk)
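The building-block idea - writing one year of the state process as a composition of subprocess matrices - can be shown with a two-stage example. The rates are hypothetical; composing a survival/ageing matrix with a birth matrix recovers a standard Leslie-type projection matrix, which is the deterministic skeleton of the stochastic state process described in the paper.

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Subprocess matrices for a two-stage (juvenile, adult) model, hypothetical rates.
S = [[0.0, 0.0],   # survival/ageing: surviving juveniles (0.6) become adults,
     [0.6, 0.9]]   # adults survive with probability 0.9
B = [[0.0, 0.8],   # birth: each post-survival adult produces 0.8 juveniles
     [0.0, 1.0]]   # adults persist through the birth subprocess
L = mat_mul(B, S)  # composed one-year projection: a Leslie-type matrix

state = [100.0, 50.0]            # (juveniles, adults)
for _ in range(10):
    state = mat_vec(L, state)    # deterministic ten-year projection
```

Replacing each deterministic subprocess with a stochastic one (binomial survival, Poisson births) yields exactly the kind of state process the paper fits by sequential importance sampling.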
Sat, 01 Jan 2005 00:00:00 GMT | http://hdl.handle.net/10023/678 | Thomas, Len; Buckland, Stephen T.; Newman, K. B.; Harwood, John

WinBUGS for population ecologists: Bayesian modeling using Markov Chain Monte Carlo methods.
http://hdl.handle.net/10023/677
The computer package WinBUGS is introduced. We first give a brief introduction to Bayesian theory and its implementation using Markov chain Monte Carlo (MCMC) algorithms. We then present three case studies showing how WinBUGS can be used when classical theory is difficult to implement. The first example uses data on white storks from Baden Württemberg, Germany, to demonstrate the use of mark-recapture models to estimate survival, and also how to cope with unexplained variance through random effects. Recent advances in methodology and also the WinBUGS software allow us to introduce (i) a flexible way of incorporating covariates using spline smoothing and (ii) a method to deal with missing values in covariates. The second example shows how to estimate population density while accounting for detectability, using distance sampling methods applied to a test dataset collected on a known population of wooden stakes. Finally, the third case study involves the use of state-space models of wildlife population dynamics to make inferences about density dependence in a North American duck species. Reversible Jump MCMC is used to calculate the probability of various candidate models. For all examples, data and WinBUGS code are provided.
This paper was presented at the EURING 2007 Technical Meeting, January 14-21, Dunedin, New Zealand. It has been submitted for publication in the conference proceedings, which will appear as a special issue of Environmental and Ecological Statistics.; The zip file contains accompanying code in WinBUGS
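What WinBUGS automates can be seen in miniature by writing a random-walk Metropolis sampler by hand for the simplest model of the kind discussed: a single survival probability with a Uniform(0, 1) prior and a binomial likelihood. The data values below are invented.

```python
import math, random

def metropolis_survival(survived, released, n_iter=5000, step=0.1, seed=3):
    """Random-walk Metropolis sampler for a survival probability phi with
    a Uniform(0, 1) prior and a Binomial(survived | released, phi)
    likelihood; a hand-written miniature of what WinBUGS automates."""
    rng = random.Random(seed)

    def log_post(phi):
        if not 0.0 < phi < 1.0:
            return -math.inf     # outside the prior's support
        return survived * math.log(phi) + (released - survived) * math.log(1.0 - phi)

    phi, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(n_iter):
        prop = phi + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            phi, lp = prop, lp_prop          # accept the proposal
        samples.append(phi)
    return samples

post = metropolis_survival(survived=30, released=50)
post_mean = sum(post[1000:]) / len(post[1000:])   # discard burn-in
```

With 30 survivors out of 50 released, the posterior is Beta(31, 21), so the sample mean should settle near 31/52, roughly 0.6.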
Tue, 01 Jan 2008 00:00:00 GMT | http://hdl.handle.net/10023/677 | Giminez, O.; Bonner, S. J.; King, Ruth; Parker, R. A.; Brooks, S. P.; Jamieson, L. E.; Grosbois, V.; Morgan, B. J. T.; Thomas, Len

Density estimation and time trend analysis of large herbivores in Nagarhole, India
http://hdl.handle.net/10023/669
Density estimates for six large herbivore species were obtained through
analysis of line transect data from Nagarhole National Park, south-western India,
collected between 1989 and 2000. These species were Chital (Axis axis), Sambar
(Cervus unicolor), Gaur (Bos gaurus), Wild Pig (Sus scrofa), Muntjac (Muntiacus
muntjak) and Asian Elephant (Elephas maximus). Multiple Covariate Distance
Sampling (MCDS) models were used to derive these density estimates. The distance
histograms showed a relatively large spike at zero, which can lead to problems when
fitting MCDS models. The effects of this spike were investigated and remedied by
forward truncation. Density estimates from the unmodified dataset were 10-15% higher
than estimates from the forward-truncated data, rising to 37% for Muntjac, and may
therefore be overestimates. Empirical trend models were then fit to the density
estimates. Overall trends were stable, though there were intra-habitat differences
in trends for some species. Trends were similar whether or not forward truncation
was applied.
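The effect of a zero-distance spike, and the forward (left) truncation remedy, can be sketched with a half-normal detection model (a hypothetical simulation, not the thesis's data; all numbers are invented):

```python
import math
import random

def loglik_trunc(sigma, xs, t):
    """Log-likelihood of half-normal scale sigma for distances xs
    left-truncated at t (observations below t have been discarded)."""
    n = len(xs)
    surv = 1.0 - math.erf(t / (sigma * math.sqrt(2)))  # P(X > t)
    return (-sum(x * x for x in xs) / (2 * sigma ** 2)
            - n * math.log(sigma) - n * math.log(surv))

random.seed(1)
# Distances: mostly half-normal (sigma = 50 m), plus an excess spike
# of near-zero detections of the kind seen in the histograms.
xs = [abs(random.gauss(0, 50)) for _ in range(900)] + \
     [random.uniform(0, 2) for _ in range(100)]

# Naive closed-form MLE: the spike biases sigma downwards, which
# inflates the fitted density at zero and hence the density estimate.
sigma_naive = math.sqrt(sum(x * x for x in xs) / len(xs))

# Forward truncation: drop detections within t = 5 m and refit the
# left-truncated model by a simple ternary search over sigma.
t = 5.0
xt = [x for x in xs if x > t]
lo_s, hi_s = 5.0, 200.0
for _ in range(200):
    m1 = lo_s + (hi_s - lo_s) / 3
    m2 = hi_s - (hi_s - lo_s) / 3
    if loglik_trunc(m1, xt, t) < loglik_trunc(m2, xt, t):
        lo_s = m1
    else:
        hi_s = m2
sigma_trunc = (lo_s + hi_s) / 2   # recovers roughly the true sigma
```

The truncated fit recovers a larger (less spike-contaminated) scale than the naive fit, mirroring the 10-37% inflation reported above.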
MRes in Environmental Biology
Sat, 01 Jan 2005 00:00:00 GMT | http://hdl.handle.net/10023/669 | Gangadharan, Aditya

Models of random wildlife movement with an application to distance sampling
http://hdl.handle.net/10023/668
In this paper we present three models of random wildlife movement: a one-dimensional model of wildlife-observer encounters on roads, an analogous two-dimensional model, and a further two-dimensional model that borrows ideas from statistical mechanics. We then derive unbiased estimators of wildlife density in terms of encounters for each of these models. By extending these results to incorporate uncertain detection, we suggest three novel distance sampling methods and briefly consider possible field applications.
Mon, 01 Jan 2007 00:00:00 GMT | http://hdl.handle.net/10023/668 | DiTraglia, Francis J.

Designing a shipboard line transect survey to estimate cetacean abundance off the Azores Archipelago, Portugal
http://hdl.handle.net/10023/667
Management schemes dedicated to the conservation of wildlife populations rely on the effective monitoring of population size, and this requires the accurate and precise estimation of abundance. The accuracy and precision of estimates are determined to a large extent by the survey design. Line transect surveys are commonly applied to wildlife population assessments in which the primary purpose of a survey design is to ensure that the critical distance sampling assumptions are met.
Little information is available regarding cetacean abundance in the Archipelago of the Azores (Portugal). This study aims to design a shipboard line transect survey that allows the collection of the data required to provide abundance estimates for such species. Several aspects must be taken into consideration when designing a survey to estimate cetacean abundance. This is an iterative process, and there is a constant trade-off between the logistic constraints and the desired statistical robustness. Information on this process, such as the criteria used for the choices made when defining the elements of a survey design, is provided to aid policy makers and environmental managers.
Three survey effort scenarios are provided to illustrate the range of possibilities between statistical robustness and logistic/management restrictions. A survey is designed for the most economical scenario (L = 5,000 km), although the second scenario (L = 17,600 km) is the one recommended for implementation, as it provides robust estimates of abundance (CV <= 0.2).
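The link between survey effort and a precision target like CV <= 0.2 can be sketched with the standard distance-sampling rule of thumb, L = (b / CV²) × (L0 / n0), where n0 detections were made on a pilot leg of length L0 and b ≈ 3 is a commonly used dispersion constant (the pilot numbers below are invented for illustration):

```python
def required_line_length(cv_target, n0, L0, b=3.0):
    """Approximate total transect length L needed to achieve a target
    coefficient of variation of density, given n0 detections on a
    pilot survey of length L0 (rule-of-thumb; b ~ 3 is typical)."""
    return (b / cv_target ** 2) * (L0 / n0)

# e.g. a hypothetical pilot leg of 500 km yielding 40 sightings,
# aiming for CV <= 0.2:
L = required_line_length(0.2, n0=40, L0=500.0)
```

Because effort scales with 1/CV², halving the target CV quadruples the required line length, which is why the robust scenario above needs several times the effort of the economical one.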
Revised version November 2008. MRes in Marine Mammal Science
Tue, 01 Jan 2008 00:00:00 GMT | http://hdl.handle.net/10023/667 | Faustino, Cláudia Estevinho Santos

Behavioural changes of a long-ranging diver in response to oceanographic conditions
http://hdl.handle.net/10023/665
The development of an animal-borne instrument that can record oceanographic measurements (CTD-SRDL) has enabled the collection of oceanographic data at a scale relevant to the counterpart behavioural data, both in time and 3-dimensional space. This has advanced the potential for studies of the behaviour of deep-diving marine animals and the way in which they respond to their environment, yet the nature of the data delivered by CTD-SRDLs presents substantial analytical challenges and places constraints on its biological interpretation. Behavioural and environmental data, collected using CTD-SRDLs deployed on southern elephant seals (Mirounga leonina) from the South Georgia subpopulation in 2004 and 2005, are analysed for 13 females and 4 males (21,015 dives). Compressed dive profiles are used to classify individual dives into six distinct types based on their 2-dimensional time-depth characteristics using random forest classification. The relationship between dive type and environmental variables, derived from oceanographic data recorded on board the animals, is investigated in the context of regression analysis, employing a multinomial model, as well as independently fitted Generalized Linear Models (GLM) and Generalized Additive Models (GAM) for each dive type. Regression is not found to be an appropriate method for analysing abstracted behavioural dive data, and other methods are suggested. We show that functional specializations can be manifested within a dive type, using square bottom dives (SQ) as an example. The usefulness of dive classification is discussed in the context of behavioural interpretation, and validity of the ecological functions attached to each class. Preliminary analyses are important drivers of further research into improving the interpretability of abstracted behavioural data, and developing efficient, standardized methods for widespread application to this type of data, which is obtained in abundance via satellite telemetry.
BL 5019 Research project. MRes Environmental Biology
Mon, 01 Jan 2007 00:00:00 GMT | http://hdl.handle.net/10023/665 | Photopoulos, Theoni

Using generalized estimating equations with regression splines to improve analysis of butterfly transect data
http://hdl.handle.net/10023/488
Surveying animal populations is an important aspect of wildlife
management. Distinguishing trend from random fluctuations and
quantifying trend are key goals in any analysis.
The aim of this thesis is to review analyses of Butterfly Monitoring
Survey (BMS) data and to develop new methods which address some
flaws in previous studies. The BMS was established in 1976 at Monks
Wood, Cambridgeshire and sites were added over time throughout
Britain in order to monitor butterfly population trends. Weekly
counts are made over the monitoring season and the main aims are to
produce annual indices and compare these indices over time for any
particular species.
Originally, weekly counts were summed to produce relative indices
and missing counts were estimated using linear interpolation. This
thesis discusses the weaknesses of this basic method
and suggests possible improvements.
In recent years, with advancements in statistical methods and
increased computer power, new methods can be applied to accommodate
the longitudinal and flexible nature of ecological data.
Mixed Models, Generalized Estimating Equations and Generalized
Additive Models are used and the relative merits of each modelling
approach discussed. These methods allow for correlation and
non-linearity in data.
Model selection is an important consideration when modelling and
different tests are introduced and compared.
Once a model is selected, site-level indices are estimated, which
can be collated to produce regional and national indices. Different
methods of estimating precision around indices are also contrasted.
Bootstrapping is found to be a convenient and dependable approach.
Abundance is difficult to disentangle from detectability when only
counts are made. Methods for dealing with this
problem are suggested.
Once reliable annual abundance estimates are found, they can be
compared over time using a variety of statistical techniques. The
chain-ratio method is applied to a subset of real data.
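The bootstrap approach to index precision that the thesis favours can be sketched as resampling sites with replacement (site values below are invented for illustration):

```python
import random

random.seed(42)
# Hypothetical site-level annual indices for one species.
site_indices = [12.1, 8.4, 15.0, 9.9, 11.3, 7.6, 14.2, 10.8]

def boot_ci(values, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean index,
    resampling sites with replacement."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = boot_ci(site_indices)   # interval around the regional index
```

The same resampling scheme extends directly to regional and national indices by bootstrapping sites within each stratum.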
Sun, 01 Jun 2008 00:00:00 GMT | http://hdl.handle.net/10023/488 | Brewer, Ciara

Incorporating measurement error and density gradients in distance sampling surveys
http://hdl.handle.net/10023/391
Distance sampling is one of the most commonly used methods for estimating density
and abundance. Conventional methods are based on the distances of detected animals
from the center of point transects or the center line of line transects. These distances
are used to model a detection function: the probability of detecting an animal, given
its distance from the line or point. The probability of detecting an animal in the
covered area is given by the mean value of the detection function with respect to
the available distances to be detected. Given this probability, a Horvitz-Thompson-
like estimator of abundance for the covered area follows, hence using a model-based
framework. Inferences for the wider survey region are justified using the survey design.
Conventional distance sampling methods are based on a set of assumptions. In
this thesis I present results that extend distance sampling on two fronts.
Firstly, estimators are derived for situations in which there is measurement error in
the distances. These estimators use information about the measurement error in two
ways: (1) a biased estimator based on the contaminated distances is multiplied by an
appropriate correction factor, which is a function of the errors (PDF approach), and
(2) cast into a likelihood framework that allows parameter estimation in the presence
of measurement error (likelihood approach).
Secondly, methods are developed that relax the conventional assumption that the
distribution of animals is independent of distance from the lines or points (usually
guaranteed by appropriate survey design). In particular, the new methods deal with
the case where animal density gradients are caused by the use of non-random sampler
allocation, for example transects placed along linear features such as roads or streams.
This is dealt with separately for line and point transects, and at a later stage an
approach for combining the two is presented.
A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
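The conventional detection-function and Horvitz-Thompson-like machinery described in the abstract can be sketched for a half-normal detection function (a generic illustration; parameter values are invented):

```python
import math

def halfnormal_g(x, sigma):
    """Detection function g(x): probability of detecting an animal at
    perpendicular distance x, with g(0) = 1 assumed."""
    return math.exp(-x ** 2 / (2 * sigma ** 2))

def p_covered(sigma, w, steps=10000):
    """Mean of the detection function over the available distances
    0..w (uniform availability assumed): the average probability of
    detecting an animal in the covered strip. Midpoint rule."""
    return sum(halfnormal_g((i + 0.5) * w / steps, sigma)
               for i in range(steps)) / steps

def ht_abundance(n_detected, sigma, w):
    """Horvitz-Thompson-like abundance estimate for the covered area:
    each detection is inflated by the inverse detection probability."""
    return n_detected / p_covered(sigma, w)
```

With a very flat detection function (sigma much larger than w) the average detection probability approaches 1 and the estimate reduces to the raw count, as it should.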
Thu, 01 Nov 2007 00:00:00 GMT | http://hdl.handle.net/10023/391 | Marques, Tiago Andre Lamas Oliveira

A Bayesian approach to modelling field data on multi-species predator-prey interactions
http://hdl.handle.net/10023/174
Multi-species functional response models are required to model the predation of generalist predators, which consume more than one prey species. In chapter 2, a new model for the multi-species functional response is presented. This model can describe generalist predators that exhibit functional responses of Holling type II to some of their prey and of type III to other prey. In chapter 3, I review some of the theoretical distinctions between Bayesian and frequentist statistics and show how Bayesian statistics are particularly well-suited to the fitting of functional response models, because uncertainty can be represented comprehensively. In chapters 4 and 5, the multi-species functional response model is fitted to field data on two generalist predators: the hen harrier Circus cyaneus and the harp seal Phoca groenlandica. I am not aware of any previous Bayesian model of the multi-species functional response that has been fitted to field data.
The hen harrier's functional response, fitted in chapter 4, is strongly sigmoidal in the density of red grouse Lagopus lagopus scoticus, but no type III shape was detected in the response to the two main prey species, field vole Microtus agrestis and meadow pipit Anthus pratensis. The impact of using Bayesian or frequentist models on the resulting functional response is discussed. In chapter 5, no functional response could be fitted to the data on harp seal predation. Possible reasons are discussed, including poor data quality and a lack of relevance of the available data for informing a behavioural functional response model.
I conclude with a comparison of the roles that functional responses play in behavioural, population and community ecology, and emphasise the need for further research into unifying these different approaches to understanding predation, with particular reference to predator movement.
In an appendix, I evaluate the possibility of using a functional response to infer the abundances of prey species from performance indicators of generalist predators feeding on those prey. I argue that this approach may be futile in general, because a generalist predator's energy intake does not depend on the density of any single one of its prey, so the possibly unknown densities of all prey need to be taken into account.
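The Holling type II/III distinction at the heart of the thesis can be sketched with a generic multi-species functional response (a standard textbook form, not necessarily the thesis's exact new parameterization; all parameter values are invented):

```python
def multispecies_fr(densities, attack, handling, shape):
    """Per-prey consumption rate for a generalist predator.
    For prey j, shape m_j = 1 gives a Holling type II response and
    m_j = 2 gives a sigmoidal type III response; the shared
    denominator couples intake across all prey."""
    denom = 1.0 + sum(a * h * n ** m
                      for a, h, n, m in zip(attack, handling,
                                            densities, shape))
    return [a * n ** m / denom
            for a, n, m in zip(attack, densities, shape)]

# Two prey: type II response to prey 1, type III to prey 2.
rates = multispecies_fr(densities=[10.0, 10.0],
                        attack=[0.5, 0.01],
                        handling=[0.1, 0.1],
                        shape=[1, 2])
```

The shared denominator is what makes the appendix's point concrete: intake on any one prey depends on the densities of all prey, so a single prey's density cannot be read off from predator performance alone.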
Sun, 01 Jan 2006 00:00:00 GMT | http://hdl.handle.net/10023/174 | Asseburg, Christian

Reconstruction of foliations from directional information
http://hdl.handle.net/10023/158
In many areas of science, especially geophysics, geography and
meteorology, the data are often directions or axes rather than
scalars or unrestricted vectors. Directional statistics considers
data which are mainly unit vectors lying in two- or
three-dimensional space (R² or R³). One
way in which directional data arise is as normals to foliations. A
(codimension-1) foliation of R^d is a system
of non-intersecting (d-1)-dimensional surfaces filling out the
whole of R^d. At each point z of R^d, any given codimension-1
foliation determines a unit vector v normal to the surface through z.
The problem considered here is that of reconstructing the foliation
from observations (z_i, v_i), i = 1, ..., n. One
way of doing this is rather similar to fitting smoothing splines to
data. That is, the reconstructed foliation has to be as close to the
data as possible, while the foliation itself is not too rough. A
trade-off parameter is introduced to control the balance between
smoothness and closeness. The approach used in this thesis is to take
the surfaces to be surfaces of constant values of a suitable
real-valued function h on R^d. The problem of reconstructing a
foliation is translated into the language of Schwartz distributions,
and a deep result in the theory of distributions is used to give the
appropriate general form of the fitted function h. The model
parameters are estimated by a simplified Newton method. Under
appropriate distributional assumptions on v_1, ..., v_n, confidence
regions for the true normals are developed and estimates of
concentration are given.
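The smoothness-closeness balance described above can be written schematically as a penalized fitting criterion (a plausible generic form with trade-off parameter λ, not necessarily the thesis's exact functional):

```latex
\hat{h} \;=\; \arg\min_{h}\;
\sum_{i=1}^{n} \left\| v_i - \frac{\nabla h(z_i)}{\|\nabla h(z_i)\|} \right\|^2
\;+\; \lambda \int_{\mathbb{R}^d} \|\mathrm{D}^2 h(z)\|^2 \, dz
```

The first term measures how far the unit normals of the fitted level surfaces are from the observed directions; the second penalizes roughness of h, with larger λ favouring smoother foliations.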
Fri, 01 Jun 2007 00:00:00 GMT | http://hdl.handle.net/10023/158 | Yeh, Shu-Ying