Statistics
http://hdl.handle.net/10023/95
Tue, 23 Jul 2019 15:20:13 GMT
Loggerhead turtle Caretta caretta density and abundance in Chesapeake Bay and the temperate ocean waters of the southern portion of the Mid-Atlantic Bight
http://hdl.handle.net/10023/16833
We conducted aerial surveys of sea turtles in 2011 and 2012, incorporating corrections for perception and availability bias in Chesapeake Bay and near-shore continental shelf waters of the Mid-Atlantic Bight off the US states of Virginia and Maryland. Results of these surveys and ancillary research to determine surface times for loggerhead turtles provide us with a new baseline population estimate for turtles in the region. Prior surveys were conducted in Chesapeake Bay in the mid-1980s and early 2000s, and in ocean waters in the late 1970s and early 1980s. Although comparison of density estimates not corrected for availability between prior surveys and this effort suggests that the population of sea turtles, especially loggerhead turtles, is higher than previous estimates, differences between surveys may be the result of survey methodologies and cannot be assumed to be true changes in density. Surface time for availability corrections was calculated using dive summaries from satellite telemetry on 27 loggerhead turtles tracked between 2011 and 2015. We calculated stratified seasonal availability corrections for bay and ocean waters based on assumed differences in turtle behavior and water clarity between the 2 habitats. For each habitat, we provided seasonal corrections for 3 detection depth bins (shallow, moderate, and deep) to account for differences in sub-surface detection ranges. Differences and trends toward differences among availability corrections underscore the need to better understand the many variables that affect surface time for sea turtles in temperate waters, and the effect that availability has on abundance and density estimates.
Funding was provided by the NOAA Species Recovery Grants to States program (Award #NA 47200033) issued to the Virginia Department of Game and Inland Fisheries which contracted with the Virginia Aquarium & Marine Science Center Foundation. Additional funding for tags and turtle capture was also provided by US Fleet Forces Command as well as the Virginia Aquarium Batten Collaborative Research Fund and Batten Professional Development Fund.
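As an illustration of how such corrections enter an abundance estimate, the sketch below scales a raw count by assumed availability and perception probabilities. All numbers are hypothetical; the study's actual corrections are stratified by season, habitat, and detection-depth bin.

```python
# Sketch of an availability/perception bias correction for aerial survey
# counts (illustrative only; not the study's stratified corrections).

def corrected_abundance(n_seen, area_surveyed, total_area,
                        p_available, p_perception=1.0):
    """Scale a raw count by availability (fraction of time within
    detection depth) and perception probability, then extrapolate."""
    density = n_seen / (area_surveyed * p_available * p_perception)
    return density * total_area

# Hypothetical numbers: 40 turtles seen over 100 km^2 of a 1000 km^2
# region, turtles detectable 30% of the time, observers spotting 90%
# of those available.
est = corrected_abundance(40, 100.0, 1000.0,
                          p_available=0.30, p_perception=0.90)
print(round(est))
```

Note how omitting the corrections (setting both probabilities to 1) would give only 400 animals here, illustrating why uncorrected density estimates from different survey eras are not directly comparable.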
Thu, 13 Dec 2018 00:00:00 GMT | http://hdl.handle.net/10023/16833 | Barco, Susan G.; Burt, M. Louise; DiGiovanni, Robert A.; Swingle, W. Mark; Williard, Amanda S.
Estimation of population size when capture probability depends on individual states
http://hdl.handle.net/10023/16735
We develop a multi-state model to estimate the size of a closed population from capture–recapture studies. We consider the case where capture–recapture data are not of a simple binary form, but where the state of an individual is also recorded upon every capture as a discrete variable. The proposed multi-state model can be regarded as a generalisation of the commonly applied set of closed population models to a multi-state form. The model allows for heterogeneity within the capture probabilities associated with each state while also permitting individuals to move between the different discrete states. A closed-form expression for the likelihood is presented in terms of a set of sufficient statistics. The link between existing models for capture heterogeneity is established, and simulation is used to show that the estimate of population size can be biased when movement between states is not accounted for. The proposed unconditional approach is also compared to a conditional approach to assess estimation bias. The model derived in this paper is motivated by a real ecological data set on great crested newts, Triturus cristatus.
Funding: Carnegie Trust for the Universities of Scotland, UK Engineering and Physical Sciences Research Council (EP/10009171/1), UK Natural Environment Research Council (NE/J018473/1)
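To see why ignoring state structure matters, consider the two-occasion Lincoln-Petersen estimator, the simplest member of the closed-population family that the paper generalises. A deterministic expected-value calculation (hypothetical numbers, not the newt data) shows the downward bias induced by state-dependent capture probabilities:

```python
# Expected-value sketch: state-dependent capture probabilities bias the
# classic Lincoln-Petersen estimator N_hat = n1 * n2 / m2 downward.

def lincoln_petersen_expected(state_sizes, capture_probs):
    """Plug expected counts from a two-occasion study into n1*n2/m2."""
    n1 = sum(N * p for N, p in zip(state_sizes, capture_probs))
    n2 = n1  # same capture probabilities assumed on both occasions
    m2 = sum(N * p * p for N, p in zip(state_sizes, capture_probs))
    return n1 * n2 / m2

true_N = 1000
# Two states of 500 animals each with very different capture probabilities:
biased = lincoln_petersen_expected([500, 500], [0.4, 0.1])
# A homogeneous population with the same mean capture probability:
unbiased = lincoln_petersen_expected([1000], [0.25])
print(round(biased), round(unbiased))
```

The heterogeneous case recovers roughly 735 of the 1000 animals in expectation, while the homogeneous case is unbiased; modelling the states directly, as the paper does, removes this source of bias.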
Fri, 01 Mar 2019 00:00:00 GMT | http://hdl.handle.net/10023/16735 | Worthington, Hannah; McCrea, Rachel; King, Ruth; Griffiths, Richard
Title redacted
http://hdl.handle.net/10023/16693
Mon, 20 Nov 2017 00:00:00 GMT | http://hdl.handle.net/10023/16693 | Erichson, N. Benjamin
Incorporating animal movement with distance sampling and spatial capture-recapture
http://hdl.handle.net/10023/16467
Distance sampling and spatial capture-recapture are statistical methods to estimate the number of animals in a wild population based on encounters between these animals and scientific detectors. Both methods estimate the probability that an animal is detected during a survey, but do not explicitly model animal movement.
The primary challenge is that animal movement in these surveys is unobserved; one must average over all possible paths each animal could have travelled during the survey. In this thesis, a general statistical model, with distance sampling and spatial capture-recapture as special cases, is presented that explicitly incorporates animal movement. An efficient algorithm to integrate over all possible movement paths, based on quadrature and hidden Markov modelling, is given to overcome the computational obstacles.
For distance sampling, simulation studies and case studies show that incorporating animal movement can reduce the bias in estimated abundance found in conventional models and expand the application of distance sampling to surveys that violate the assumption of no animal movement. For spatial capture-recapture, continuous-time encounter records are used to make detailed inference on where animals spend their time during the survey. In surveys conducted in discrete occasions, maximum likelihood models that allow for mobile activity centres are presented to account for transience, dispersal, and heterogeneous space use. These methods provide an alternative when animal movement causes bias in standard methods, and the opportunity to gain richer inference on how animals move, where they spend their time, and how they interact.
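The quadrature-plus-hidden-Markov idea can be sketched in miniature: discretise space into cells, propagate a distribution over animal locations with a movement transition matrix, and weight by detection or non-detection at each time step. The 1-D grid, random-walk kernel, and half-normal detection parameters below are invented for illustration, not taken from the thesis.

```python
import math

# Toy forward algorithm: sum over all movement paths on a discretised
# grid in O(time * cells^2) instead of enumerating paths explicitly.

n_cells = 5
# Random-walk transition matrix with reflecting boundaries.
T = [[0.0] * n_cells for _ in range(n_cells)]
for i in range(n_cells):
    T[i][i] = 0.5
    T[i][max(i - 1, 0)] += 0.25
    T[i][min(i + 1, n_cells - 1)] += 0.25

# Half-normal detection probability for a detector at cell 0.
p_det = [0.6 * math.exp(-0.5 * (x / 1.5) ** 2) for x in range(n_cells)]

alpha = [1.0 / n_cells] * n_cells   # uniform initial location
history = [0, 0, 1]                 # miss, miss, detection

for obs in history:
    moved = [sum(alpha[i] * T[i][j] for i in range(n_cells))
             for j in range(n_cells)]
    alpha = [moved[j] * (p_det[j] if obs else 1.0 - p_det[j])
             for j in range(n_cells)]

likelihood = sum(alpha)  # marginal likelihood of the encounter history
print(likelihood)
```

The same recursion scales to 2-D grids and to continuous-time encounter records by replacing the step transition matrix with a matrix exponential of a movement generator.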
Thu, 06 Dec 2018 00:00:00 GMT | http://hdl.handle.net/10023/16467 | Glennie, Richard
Surveying abundance and stand type associations of Formica aquilonia and F. lugubris (Hymenoptera: Formicidae) nest mounds over an extensive area : Trialing a novel method
http://hdl.handle.net/10023/16260
Red wood ants are ecologically important members of woodland communities, and some species are of conservation concern. They occur commonly only in certain habitats in Britain, but there is limited knowledge of their numbers and distribution. This study provided baseline information at a key locality (Abernethy Forest, 37 km²) in the central Highlands of Scotland and trialed a new method of surveying red wood ant density and stand type associations: a distance sampling line transect survey of nests. This method is efficient because it allows an observer to quickly survey a large area either side of transect lines, without having to assume that all nests are detected. Instead, data collected on the distance of nests from the line are used to estimate probability of detection and the effective transect width, using the free software "Distance". Surveys took place in August and September 2003 along a total of 71.2 km of parallel, equally-spaced transects. One hundred and forty-four red wood ant nests were located, comprising 89 F. aquilonia (Yarrow, 1955) and 55 F. lugubris (Zetterstedt, 1838) nests. Estimated densities were 1.13 nests per hectare (95% CI 0.74-1.73) for F. aquilonia and 0.83 nests per hectare (95% CI 0.32-2.17) for F. lugubris. These translated to total estimated nest numbers of 4,200 (95% CI 2,700-6,400) and 3,100 (95% CI 1,200-8,100), respectively, for the whole forest. Indices of stand selection indicated that F. aquilonia had some positive association with old-growth and F. lugubris with younger stands (stem exclusion stage). No nests were found in areas that had been clear-felled, and ploughed and planted in the 1970s-1990s. The pattern of stand type association and hence distribution of F. aquilonia and F. lugubris may be due to the differing ability to disperse (F. lugubris is the faster disperser) and compete (F. aquilonia is competitively superior). We recommend using line transect sampling for extensive surveys of ants that construct nest mounds to estimate abundance and stand type association.
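The line-transect calculation can be sketched with a half-normal detection function g(x) = exp(-x²/(2σ²)), for which the maximum-likelihood scale estimate and the effective strip width have closed forms. The distances below are made up for illustration; they are not the Abernethy data, and the Distance software additionally handles model selection and variance estimation.

```python
import math

# Hedged sketch of line-transect density estimation with a half-normal
# detection function (hypothetical distances, not the survey data).

def halfnormal_density(distances_m, total_line_km):
    n = len(distances_m)
    sigma2 = sum(x * x for x in distances_m) / n     # half-normal MLE
    esw_m = math.sqrt(math.pi * sigma2 / 2)          # effective strip width
    strip_ha = 2 * esw_m * total_line_km * 1000 / 10000  # both sides, in ha
    return n / strip_ha                              # nests per hectare

dists = [0.5, 1.2, 2.0, 0.8, 3.1, 1.5, 0.2, 2.4]     # metres from the line
print(round(halfnormal_density(dists, total_line_km=71.2), 3))
```

The effective strip width replaces an assumed fixed-width strip: nests missed far from the line are compensated by the fitted detection function rather than assumed to be seen.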
Tue, 03 Jan 2012 00:00:00 GMT | http://hdl.handle.net/10023/16260 | Borkin, Kerry; Summers, Ron; Thomas, Len
Crambled : a Shiny application to enable intuitive resolution of conflicting cellularity estimates
http://hdl.handle.net/10023/16248
It is now commonplace to investigate tumour samples using whole-genome sequencing, and some commonly performed tasks are the estimation of cellularity (or sample purity), the genome-wide profiling of copy numbers, and the assessment of sub-clonal behaviours. Several tools are available to undertake these tasks, but often give conflicting results - not least because there is often genuine uncertainty due to a lack of model identifiability. Presented here is a tool, "Crambled", that allows for an intuitive visual comparison of the conflicting solutions. Crambled is implemented as a Shiny application within R, and is accompanied by example images from two use cases (one tumour sample with matched normal sequencing, and one standalone cell line example) as well as functions to generate the necessary images from any sequencing data set. Through the use of Crambled, a user may gain insight into why each tool has offered its given solution and combined with a knowledge of the disease being studied can choose between the competing solutions in an informed manner.
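The identifiability problem can be made concrete with the standard relationship between cellularity, copy number, and the expected log ratio: logR = log2((ρ·CN + 2(1−ρ))/2) for a diploid normal. The calculation below is a generic illustration of this formula, not Crambled's internals.

```python
import math

# Two different (cellularity, copy number) pairs can give identical
# expected log ratios, so logR alone cannot identify cellularity.

def logr(rho, cn):
    """Expected log2 ratio for tumour fraction rho and copy number cn,
    against a diploid (copy number 2) normal background."""
    return math.log2((rho * cn + 2 * (1 - rho)) / 2)

a = logr(0.5, 4)   # 50% pure tumour, copy number 4
b = logr(1.0, 3)   # 100% pure tumour, copy number 3
print(a, b)        # both equal log2(3/2)
```

This is exactly the kind of conflict a visual comparison tool helps resolve: extra information (allele fractions, knowledge of the disease) is needed to choose between solutions that fit the log ratios equally well.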
Mon, 07 Dec 2015 00:00:00 GMT | http://hdl.handle.net/10023/16248 | Lynch, Andy
Title redacted
http://hdl.handle.net/10023/15909
Fri, 23 Jun 2017 00:00:00 GMT | http://hdl.handle.net/10023/15909 | Millar, Colin Pearson
Adaptive multivariate global testing
http://hdl.handle.net/10023/15760
We present a methodology for dealing with recent challenges in testing global hypotheses using multivariate observations. The proposed tests target situations, often arising in emerging applications of neuroimaging, where the sample size n is relatively small compared with the observations' dimension K. We employ adaptive designs allowing for sequential modifications of the test statistics adapting to accumulated data. The adaptations are optimal in the sense of maximizing the predictive power of the test at each interim analysis while still controlling the Type I error. Optimality is obtained by a general result applicable to typical adaptive design settings. Further, we prove that the potentially high-dimensional design space of the tests can be reduced to a low-dimensional projection space enabling us to perform simpler power analysis studies, including comparisons to alternative tests. We illustrate the substantial improvement in efficiency that the proposed tests can make over standard tests, especially in the case of n smaller or slightly larger than K. The methods are also studied empirically using both simulated data and data from an EEG study, where the use of prior knowledge substantially increases the power of the test. Supplementary materials for this article are available online.
Sun, 01 Jun 2014 00:00:00 GMT | http://hdl.handle.net/10023/15760 | Minas, Giorgos; Aston, John A D; Stallard, Nigel
Inferring transcriptional logic from multiple dynamic experiments
http://hdl.handle.net/10023/15758
Motivation: The availability of more data of dynamic gene expression under multiple experimental conditions provides new information that makes the key goal of identifying not only the transcriptional regulators of a gene but also the underlying logical structure attainable. Results: We propose a novel method for inferring transcriptional regulation using a simple, yet biologically interpretable, model to find the logic by which a set of candidate genes and their associated transcription factors (TFs) regulate the transcriptional process of a gene of interest. Our dynamic model links the mRNA transcription rate of the target gene to the activation states of the TFs, assuming that these interactions are consistent across multiple experiments and over time. A trans-dimensional Markov Chain Monte Carlo (MCMC) algorithm is used to efficiently sample the regulatory logic under different combinations of parents and rank the estimated models by their posterior probabilities. We demonstrate and compare our methodology with other methods using simulation examples and apply it to a study of transcriptional regulation of selected target genes of Arabidopsis thaliana from microarray time series data obtained under multiple biotic stresses. We show that our method is able to detect complex regulatory interactions that are consistent under multiple experimental conditions. Availability and implementation: Programs are written in MATLAB and Statistics Toolbox Release 2016b, The MathWorks, Inc., Natick, Massachusetts, United States and are available on GitHub https://github.com/giorgosminas/TRS and at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software.
This work was supported by the Biotechnology and Biological Sciences Research Council [BB/F005806/1, BB/K003097/1], the Engineering and Physical Sciences Research Council [EP/C544587/1 to DAR] and the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 305564.
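The notion of "transcriptional logic" can be sketched in a few lines: the target gene's transcription rate switches between a low and high value depending on a Boolean combination of TF activation states. The gates, state values, and rates below are hypothetical, and the paper's actual model is dynamic and fitted by trans-dimensional MCMC.

```python
# Toy illustration of transcriptional logic: a rate determined by a
# Boolean function of transcription factor activation states.

def transcription_rate(tf_states, logic, low=0.1, high=2.0):
    """tf_states: dict of TF name -> 0/1; logic: Boolean function of it."""
    return high if logic(tf_states) else low

and_gate = lambda s: s["TF1"] and s["TF2"]   # both TFs must be active
or_gate = lambda s: s["TF1"] or s["TF2"]     # either TF suffices

states = {"TF1": 1, "TF2": 0}
print(transcription_rate(states, and_gate))  # AND gate not satisfied
print(transcription_rate(states, or_gate))   # OR gate satisfied
```

Inference then amounts to ranking candidate parent sets and gates by how well the induced rate profile, pushed through transcription/degradation dynamics, explains the observed expression across all experiments.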
Wed, 01 Nov 2017 00:00:00 GMT | http://hdl.handle.net/10023/15758 | Minas, Giorgos; Jenkins, Dafyd J.; Rand, David A.; Finkenstädt, Bärbel
ReTrOS : a MATLAB toolbox for reconstructing transcriptional activity from gene and protein expression data
http://hdl.handle.net/10023/15759
BACKGROUND: Given the development of high-throughput experimental techniques, an increasing number of whole genome transcription profiling time series data sets, with good temporal resolution, are becoming available to researchers. The ReTrOS toolbox (Reconstructing Transcription Open Software) provides MATLAB-based implementations of two related methods, namely ReTrOS-Smooth and ReTrOS-Switch, for reconstructing the temporal transcriptional activity profile of a gene from given mRNA expression time series or protein reporter time series. The methods are based on fitting a differential equation model incorporating the processes of transcription, translation and degradation. RESULTS: The toolbox provides a framework for model fitting along with statistical analyses of the model with a graphical interface and model visualisation. We highlight several applications of the toolbox, including the reconstruction of the temporal cascade of transcriptional activity inferred from mRNA expression data and protein reporter data in the core circadian clock in Arabidopsis thaliana, and how such reconstructed transcription profiles can be used to study the effects of different cell lines and conditions. CONCLUSIONS: The ReTrOS toolbox allows users to analyse gene and/or protein expression time series where, with appropriate formulation of prior information about a minimum of kinetic parameters, in particular rates of degradation, users are able to infer timings of changes in transcriptional activity. Data from any organism and obtained from a range of technologies can be used as input due to the flexible and generic nature of the model and implementation. The output from this software provides a useful analysis of time series data and can be incorporated into further modelling approaches or in hypothesis generation.
This work was supported through providing funds by the Biotechnology and Biological Sciences Research Council [BB/F005806/1, BB/F005237/1]; and the Engineering and Physical Sciences Research Council [EP/C544587/1 to DAR].
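The differential-equation idea can be sketched independently of the toolbox: with mRNA dynamics dM/dt = τ(t) − δM and a known degradation rate δ, the transcription rate τ can be recovered from a (smoothed) mRNA series by finite differences. This is a simplified illustration with synthetic data, not the ReTrOS implementation, which adds smoothing/switch models and statistical analysis.

```python
# Recover transcription rate tau(t) from an mRNA series M via
# tau(t) ~ (M[t+1] - M[t]) / dt + delta * M[t]. Synthetic data.

def reconstruct_transcription(M, dt, delta):
    return [(M[i + 1] - M[i]) / dt + delta * M[i]
            for i in range(len(M) - 1)]

# Simulate forward with Euler steps under constant transcription,
# then reconstruct and check we get the true rate back.
delta, dt, tau_true = 0.5, 0.1, 2.0
M = [0.0]
for _ in range(50):
    M.append(M[-1] + dt * (tau_true - delta * M[-1]))

tau_hat = reconstruct_transcription(M, dt, delta)
print(max(abs(t - tau_true) for t in tau_hat))
```

On noisy real data the differencing step amplifies noise, which is why a smoothing or switch-point model of M, as in the toolbox, is fitted before the inversion.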
Mon, 26 Jun 2017 00:00:00 GMT | http://hdl.handle.net/10023/15759 | Minas, Giorgos; Momiji, Hiroshi; Jenkins, Dafyd J; Costa, Maria J; Rand, David A; Finkenstädt, Bärbel
Incorporating animal movement into circular plot and point transect surveys of wildlife abundance
http://hdl.handle.net/10023/15612
Estimating wildlife abundance is fundamental for its effective management and conservation. A range of methods exist: total counts, plot sampling, distance sampling and capture-recapture based approaches. All methods have assumptions, and their failure can lead to substantial bias. Current research in the field is focused not on establishing new methods but on extending existing methods to deal with violations of their assumptions.
This thesis focuses on incorporating animal movement into circular plot sampling (CPS) and point transect sampling (PTS), where a key assumption is that animals do not move while within detection range, i.e., the survey is a snapshot in time. While targeting this goal, we found some unexpected bias in PTS when animals were still and model selection was used to choose among different candidate models for the detection function (the model describing how detectability changes with observer-animal distance). Using a simulation study, we found that, although PTS estimators are asymptotically unbiased, for the recommended sample sizes the bias depended on the form of the true detection function.
We then extended the simulation study to include animal movement, and found this led to further bias in CPS and PTS. We present novel methods that incorporate animal movement with constant speed into estimates of abundance. First, in CPS, we present an analytic expression to correct for the bias given linear movement. When movement is defined by a diffusion process, a simulation-based approach, modelling the probability of animal presence in the circular plot, results in less than 3% bias in the abundance estimates. For PTS we introduce an estimator composed of two linked submodels: the movement model (animals moving linearly) and the detection model. The performance of the proposed method is assessed via simulation. Despite being biased, the new estimator yields improved results compared to ignoring animal movement in conventional PTS.
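For context, the conventional point-transect estimator whose movement bias the thesis addresses can be sketched with a half-normal detection function: the observed distances then follow a Rayleigh density, giving a closed-form scale estimate and an effective detection area of 2πσ² per point. The distances below are hypothetical.

```python
import math

# Conventional point-transect density estimate with a half-normal
# detection function g(r) = exp(-r^2/(2 sigma^2)); illustrative values.

def pts_density(distances, n_points):
    n = len(distances)
    sigma2 = sum(r * r for r in distances) / (2 * n)  # Rayleigh MLE
    effective_area = 2 * math.pi * sigma2             # m^2 per point
    return n / (n_points * effective_area)            # animals per m^2

dists = [12.0, 25.0, 7.5, 40.0, 18.0, 30.0, 22.0, 15.0]  # metres
d = pts_density(dists, n_points=20)
print(d)
```

Animal movement violates the snapshot assumption behind this formula: animals moving toward or past the observer inflate the encounter rate, which is why the thesis links a movement submodel to the detection submodel instead.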
Mon, 01 Jan 2018 00:00:00 GMThttp://hdl.handle.net/10023/156122018-01-01T00:00:00ZPrieto González, RocíoEstimating wildlife abundance is fundamental for its effective management and conservation.
A range of methods exist: total counts, plot sampling, distance sampling and
capture-recapture based approaches. Methods have assumptions and their failure can
lead to substantial bias. Current research in the field is focused not on establishing new
methods but in extending existing methods to deal with their assumptions' violation.
This thesis focus on incorporating animal movement into circular plot sampling (CPS)
and point transect sampling (PTS), where a key assumption is that animals do not move
while within detection range, i.e., the survey is a snapshot in time. While targeting this
goal, we found some unexpected bias in PTS when animals were still and model selection
was used to choose among different candidate models for the detection function (the
model describing how detectability changes with observer-animal distance). Using a simulation
study, we found that, although PTS estimators are asymptotically unbiased, for
the recommended sample sizes the bias depended on the form of the true detection function.
We then extended the simulation study to include animal movement, and found this
led to further bias in CPS and PTS. We present novel methods that incorporate animal
movement with constant speed into estimates of abundance. First, in CPS, we present
an analytic expression to correct for the bias given linear movement. When movement
is defined by a diffusion process, a simulation-based approach, modelling the probability
of animal presence in the circular plot, results in less than 3% bias in the abundance
estimates. For PTS, we introduce an estimator composed of two linked submodels: the
movement model (animals moving linearly) and the detection model. The performance of the
proposed method is assessed via simulation. Despite being biased, the new estimator
yields improved results compared to ignoring animal movement using conventional PTS.

A continuous-time formulation for spatial capture-recapture models
http://hdl.handle.net/10023/15596
Spatial capture-recapture (SCR) models are relatively new but have become the
standard approach used to estimate animal density from capture-recapture data. It
has in the past been impractical to obtain sufficient data for analysis on species that
are very difficult to capture such as elusive carnivores that occur at low density and
range very widely. Advances in technology have led to alternative ways to virtually
“capture" individuals without having to physically hold them. Some examples of
these new non-invasive sampling methods include scat or hair collection for genetic
analysis, acoustic detection and camera trapping.
In traditional capture-recapture (CR) and SCR studies, populations are sampled
at discrete points in time, leading to clear and well-defined occasions, whereas the
new detector types mentioned above sample populations continuously in time. Researchers
with continuously collected data currently need to define an appropriate
occasion and aggregate their data accordingly, thereby imposing an artificial construct
on their data for analytical convenience.
This research develops a continuous-time (CT) framework for SCR models by
treating detections as a temporal non-homogeneous Poisson process (NHPP) and
replacing the usual SCR detection function with a continuous detection hazard function.
The general CT likelihood is first developed for data from passive (also called
“proximity") detectors like camera traps that do not physically hold individuals. The
likelihood is then modified to produce a likelihood for single-catch traps (traps that
are taken out of action by capturing an animal) that has proven difficult to develop
with a discrete-occasion approach.
The lack of a suitable single-catch trap likelihood has led to researchers using
a discrete-time (DT) multi-catch trap estimator to analyse single-catch trap data.
Previous work has found the DT multi-catch estimator to be robust despite the fact
that it is known to be based on the wrong model for single-catch traps (it assumes
that the traps continue operating after catching an individual). Simulation studies in
this work confirm that the multi-catch estimator is robust for estimating density when
density is constant or does not vary much in space. However, there are scenarios with
non-constant density surfaces when the multi-catch estimator is not able to correctly
identify regions of high density. Furthermore, the multi-catch estimator is known
to be negatively biased for the intercept parameter of SCR detection functions and
there may be interest in the detection function in its own right. On the other hand
the CT single-catch estimator is unbiased or nearly so for all parameters of interest
including those in the detection function and those in the model for density.
When one assumes that the detection hazard is constant through time, there is
no impact from ignoring capture times and using only the detection frequencies. This
is of course a special case and in reality detection hazards will tend to vary in time.
However, when one assumes that the effects of time and distance in the time-varying
hazard are independent, then similarly there is no information in the capture times
about density and detection function parameters. The work here uses a detection
hazard that assumes independence between time and distance. Different forms for
the detection hazard are explored with the most flexible choice being that of a cyclic
regression spline.
Extensive simulation studies suggest, as expected, that a DT proximity estimator is
unbiased for the estimation of density even when the detection hazard varies through
time. However there are indirect benefits of incorporating capture times because
doing so will lead to a better fitting detection component of the model, and this can
prevent unexplained variation being erroneously attributed to the wrong covariate.
The analysis of two real datasets supports this assertion because the models with the
best-fitting detection hazard estimate different effects from the other models. In addition,
modelling the detection process in continuous-time leads to a more parsimonious
approach compared to using DT models when the detection hazard varies in time.
The underlying process is occurring in continuous-time and so using CT models
allows inferences to be drawn about the underlying process; for example, the time-varying
detection hazard can be viewed as a proxy for animal activity. The CT
formulation is able to model the underlying detection hazard accurately and provides
a formal modelling framework to explore different hypotheses about activity patterns.
There is scope to integrate the CT models developed here with models for space usage
and landscape connectivity to explore these processes on a finer temporal scale.
SCR models are experiencing a rapid growth in both application and method
development. The data generating process occurs in CT and hence a CT modelling
approach is a natural fit and opens up several opportunities that are not possible
with a DT formulation. The work here makes a contribution by developing and
exploring the utility of such a CT SCR formulation.
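The detection model described above can be caricatured with a minimal sketch (not the thesis code; every parameter value here is invented for illustration): detections of an individual at a fixed distance from a proximity detector are simulated as a non-homogeneous Poisson process whose hazard is, as assumed in the thesis, a product of a time component (here a diurnal activity cycle) and a distance component (here half-normal), generated by standard Poisson thinning.

```python
import math
import random

def detection_hazard(t, d, h0=2.0, sigma=150.0):
    """Separable detection hazard h(t, d) = a(t) * g(d).

    a(t): diurnal activity cycle over t in [0, 24) hours (illustrative).
    g(d): half-normal decay with distance d (metres) from the detector.
    """
    activity = h0 * (1.0 + math.sin(2.0 * math.pi * t / 24.0)) / 2.0
    distance_effect = math.exp(-d * d / (2.0 * sigma * sigma))
    return activity * distance_effect

def simulate_detections(d, t_max=24.0, seed=1):
    """Simulate detection times on [0, t_max) by thinning a homogeneous
    Poisson process whose rate dominates the hazard everywhere."""
    rng = random.Random(seed)
    h_max = detection_hazard(6.0, 0.0)  # a(t) peaks at t = 6 h, g(0) = 1
    times, t = [], 0.0
    while True:
        t += rng.expovariate(h_max)     # next candidate event
        if t >= t_max:
            break
        if rng.random() < detection_hazard(t, d) / h_max:
            times.append(t)             # accept with prob h(t, d) / h_max
    return times

near = simulate_detections(d=50.0)      # many detections expected
far = simulate_detections(d=400.0)      # few detections expected
```

With the hazard separable in time and distance, the distance component scales how many detections occur while the time component shapes when they occur, which is why capture times carry no extra information about density under this assumption.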
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/15596
Distiller, Greg

Statistical issues in first-in-human studies on BIA 10-2474: neglected comparison of protocol against practice
http://hdl.handle.net/10023/12740
By setting the regulatory-approved protocol for a suite of first-in-human studies on BIA 10-2474 against the subsequent French investigations, we highlight six key design and statistical issues which reinforce recommendations made by a Royal Statistical Society Working Party in the aftermath of the cytokine release storm in six healthy volunteers in the UK in 2006. The six issues are dose determination, availability of pharmacokinetic results, dosing interval, stopping rules, appraisal by the safety committee, and the clear algorithm required if combining approvals for single and multiple ascending dose studies.
Funding information: European Union's FP7 programme, Grant/Award Number: 602552
Wed, 15 Mar 2017 00:00:00 GMT
http://hdl.handle.net/10023/12740
Bird, Sheila M.; Bailey, Rosemary A.; Grieve, Andrew P.; Senn, Stephen

Modelling the spatial dynamics of non-state terrorism : world study, 2002-2013
http://hdl.handle.net/10023/12067
To this day, terrorism perpetrated by non-state actors persists as a worldwide threat, as exemplified by the recent lethal attacks in Paris, London, Brussels, and the ongoing massacres perpetrated by the Islamic State in Iraq, Syria and neighbouring countries. In response, states deploy various counterterrorism policies, the costs of which could be reduced through more efficient preventive measures. The literature has not applied statistical models able to account for complex spatio-temporal dependencies, despite their potential for explaining and preventing non-state terrorism at the sub-national level. In an effort to address this shortcoming, this thesis employs Bayesian hierarchical models, where the spatial random field is represented by a stochastic partial differential equation. The results show that lethal terrorist attacks perpetrated by non-state actors tend to be concentrated in areas located within failed states from which they may diffuse locally, towards neighbouring areas. At the sub-national level, the propensity of attacks to be lethal and the frequency of lethal attacks appear to be driven by antagonistic mechanisms. Attacks are more likely to be lethal far away from large cities, at higher altitudes, in less economically developed areas, and in locations with higher ethnic diversity. In contrast, the frequency of lethal attacks tends to be higher in more economically developed areas, close to large cities, and within democratic countries.
Thu, 07 Dec 2017 00:00:00 GMT
http://hdl.handle.net/10023/12067
Python, André

Modelling complex dependencies inherent in spatial and spatio-temporal point pattern data
http://hdl.handle.net/10023/12009
Point processes are mechanisms that beget point patterns. Realisations of point processes are observed in many contexts, for example, locations of stars in the sky, or locations of trees in a forest. Inferring the mechanisms that drive point processes relies on the development of models that appropriately account for the dependencies inherent in the data. Fitting models that adequately capture the complex dependency structures in either space, time, or both is often problematic. This is commonly due to—but not restricted to—the intractability of the likelihood function, or computational burden of the required numerical operations.
This thesis primarily focuses on developing point process models with some hierarchical structure, and specifically where this is a latent structure that may be considered as one of the following: (i) some unobserved construct assumed to be generating the observed structure, or (ii) some stochastic process describing the structure of the point pattern. Model fitting procedures utilised in this thesis include either (i) approximate-likelihood techniques to circumvent intractable likelihoods, (ii) stochastic partial differential equations to model continuous spatial latent structures, or (iii) improving computational speed in numerical approximations by exploiting automatic differentiation.
Moreover, this thesis extends classic point process models by considering multivariate dependencies. This is achieved through considering a general class of joint point process model, which utilise shared stochastic structures. These structures account for the dependencies inherent in multivariate point process data. These models are applied to data originating from various scientific fields; in particular, applications are considered in ecology, medicine, and geology. In addition, point process models that account for the second order behaviour of these assumed stochastic structures are also considered.
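As a concrete (and much simplified) illustration of a mechanism that begets a point pattern, the sketch below simulates an inhomogeneous Poisson process on the unit square by thinning; the intensity surface is invented for the example and is unrelated to the thesis's models.

```python
import random

def poisson_draw(rng, lam):
    """Poisson variate: count exponential waiting times falling in [0, 1]."""
    n, t = 0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > 1.0:
            return n
        n += 1

def simulate_ipp(intensity, lam_max, seed=42):
    """Inhomogeneous Poisson point pattern on [0, 1]^2 by thinning a
    homogeneous process of rate lam_max >= intensity(x, y) everywhere."""
    rng = random.Random(seed)
    n = poisson_draw(rng, lam_max)          # candidate points
    points = []
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if rng.random() < intensity(x, y) / lam_max:
            points.append((x, y))           # keep with prob λ(x,y)/λ_max
    return points

# Intensity peaking in the north-east corner (purely illustrative)
intensity = lambda x, y: 400.0 * x * y
pattern = simulate_ipp(intensity, lam_max=400.0)
```

Thinning works for any bounded intensity surface, which is why it is a standard device for simulating the point patterns that models like those in the thesis are fitted to.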
Fri, 23 Jun 2017 00:00:00 GMT
http://hdl.handle.net/10023/12009
Jones-Todd, Charlotte M.

Correlation estimation using components of Japanese candlesticks
http://hdl.handle.net/10023/11901
Using the wicks' difference from the classical Japanese candlestick representation of daily open, high, low and close prices brings efficiency when estimating the correlation in a bivariate Brownian motion. An interpretation of the correlation estimator of Rogers and Zhou (2008) in the light of the wicks' difference allows us to suggest modifications, which lead to increased efficiency and robustness against the baseline model. An empirical study on four major financial markets confirms the advantages of the modified estimator.
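The data setting can be sketched as follows (an illustration only: it simulates correlated candlestick data and computes a plain close-to-close correlation estimate as a baseline; the wick-based Rogers-Zhou refinements discussed in the paper are not reproduced here).

```python
import math
import random

def simulate_candles(rho, n_days=500, steps=390, seed=7):
    """Daily OHLC candlesticks for two log-prices driven by a bivariate
    Brownian motion with correlation rho (illustrative parameters)."""
    rng = random.Random(seed)
    dt = 1.0 / steps
    a, b = 0.0, 0.0
    candles = []
    for _ in range(n_days):
        o1, o2 = a, b                       # daily opens
        hi1 = lo1 = a
        hi2 = lo2 = b
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            a += math.sqrt(dt) * z1
            b += math.sqrt(dt) * z2
            hi1, lo1 = max(hi1, a), min(lo1, a)
            hi2, lo2 = max(hi2, b), min(lo2, b)
        candles.append(((o1, hi1, lo1, a), (o2, hi2, lo2, b)))
    return candles

def close_to_close_corr(candles):
    """Baseline estimator: Pearson correlation of daily open-to-close moves."""
    x = [c1[3] - c1[0] for c1, _ in candles]
    y = [c2[3] - c2[0] for _, c2 in candles]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    return sxy / math.sqrt(sxx * syy)

rho_hat = close_to_close_corr(simulate_candles(rho=0.6))
```

The paper's point is that the baseline above discards the information in the highs and lows; estimators built from the wicks exploit that intraday information to reduce variance.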
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/10023/11901
Popov, Valentin Mina

Title redacted
http://hdl.handle.net/10023/11739
Thu, 07 Dec 2017 00:00:00 GMT
http://hdl.handle.net/10023/11739
Sharifi Far, Serveh

Inference from randomized (factorial) experiments
http://hdl.handle.net/10023/11606
This is a contribution to the discussion of the interesting paper by Ding [Statist. Sci. 32 (2017) 331–345], which contrasts approaches attributed to Neyman and Fisher. I believe that Fisher’s usual assumption was unit-treatment additivity, rather than the “sharp null hypothesis” attributed to him. Fisher also developed the notion of interaction in factorial experiments. His explanation leads directly to the concept of marginality, which is essential for the interpretation of data from any factorial experiment.
Sun, 01 Jan 2017 00:00:00 GMT
http://hdl.handle.net/10023/11606
Bailey, Rosemary Anne

Measuring temporal trends in biodiversity
http://hdl.handle.net/10023/11534
In 2002, nearly 200 nations signed up to the 2010 target of the Convention for Biological Diversity, ‘to significantly reduce the rate of biodiversity loss by 2010’. In order to assess whether the target was met, it became necessary to quantify temporal trends in measures of diversity. This resulted in a marked shift in focus for biodiversity measurement. We explore the developments in measuring biodiversity that were prompted by the 2010 target. We consider measures based on species proportions, and also explain why a geometric mean of relative abundance estimates was preferred to such measures for assessing progress towards the target. We look at the use of diversity profiles, and consider how species similarity can be incorporated into diversity measures. We also discuss measures of turnover that can be used to quantify shifts in community composition arising for example from climate change.
Yuan was part-funded by EPSRC/NERC Grant EP/1000917/1 and Marcon by ANR-10-LABX-25-01.
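The geometric-mean index favoured in the abstract above can be sketched in a few lines (species names and counts are invented): each species' annual abundance is scaled relative to a baseline year, and the index for a year is the geometric mean of those relative abundances across species.

```python
import math

def geometric_mean_index(abundance, base_year=0):
    """abundance: dict mapping species -> list of abundance estimates per year.
    Returns, per year, the geometric mean of abundances relative to the
    baseline year (a common form of composite biodiversity indicator)."""
    n_years = len(next(iter(abundance.values())))
    index = []
    for year in range(n_years):
        log_sum = 0.0
        for series in abundance.values():
            rel = series[year] / series[base_year]  # relative abundance
            log_sum += math.log(rel)
        index.append(math.exp(log_sum / len(abundance)))
    return index

# Invented three-species example: one decline, one increase, one stable
trends = {
    "skylark": [100, 90, 80, 70],
    "buzzard": [20, 22, 25, 28],
    "wren":    [500, 500, 500, 500],
}
index = geometric_mean_index(trends)
```

Because it works on the log scale, a halving and a doubling cancel exactly, so the index tracks proportional change per species rather than being dominated by the most abundant ones.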
Sun, 01 Oct 2017 00:00:00 GMT
http://hdl.handle.net/10023/11534
Buckland, S. T.; Yuan, Y.; Marcon, Eric

Epigenetic and oncogenic regulation of SLC16A7 (MCT2) results in protein over-expression, impacting on signalling and cellular phenotypes in prostate cancer
http://hdl.handle.net/10023/11445
Monocarboxylate Transporter 2 (MCT2) is a major pyruvate transporter encoded by the SLC16A7 gene. Recent studies pointed to a consistent overexpression of MCT2 in prostate cancer (PCa) suggesting MCT2 as a putative biomarker and molecular target. Despite the importance of this observation the mechanisms involved in MCT2 regulation are unknown. Through an integrative analysis we have discovered that selective demethylation of an internal SLC16A7/MCT2 promoter is a recurrent event in independent PCa cohorts. This demethylation is associated with expression of isoforms differing only in 5'-UTR translational control motifs, providing one contributing mechanism for MCT2 protein overexpression in PCa. Genes co-expressed with SLC16A7/MCT2 also clustered in oncogenic-related pathways and effectors of these signalling pathways were found to bind at the SLC16A7/MCT2 gene locus. Finally, MCT2 knock-down attenuated the growth of PCa cells. The present study unveils an unexpected epigenetic regulation of SLC16A7/MCT2 isoforms and identifies a link between SLC16A7/MCT2, Androgen Receptor (AR), ETS-related gene (ERG) and other oncogenic pathways in PCa. These results underscore the importance of combining data from epigenetic, transcriptomic and protein level changes to allow more comprehensive insights into the mechanisms underlying protein expression, that in our case provide additional weight to MCT2 as a candidate biomarker and molecular target in PCa.
Felisbino S. received a fellowship from the Sao Paulo Research Foundation (FAPESP) ref. 2013/08830-2 and 2013/06802-1. Anne Y Warren research time funded by: Cambridge Biomedical Research Centre.
Tue, 02 Jun 2015 00:00:00 GMT
http://hdl.handle.net/10023/11445
Pértega-Gomes, Nelma; Vizcaino, Jose R.; Felisbino, Sergio; Warren, Anne Y.; Shaw, Greg; Kay, Jonathan; Whitaker, Hayley; Lynch, Andy G.; Fryer, Lee; Neal, David E.; Massie, Charles E.

Estimating Key Largo woodrat abundance using spatially explicit capture–recapture and trapping point transects
http://hdl.handle.net/10023/10625
The Key Largo woodrat (Neotoma floridana smalli) is an endangered rodent with a restricted geographic range and small population size. Establishing an efficient monitoring program of its abundance has been problematic; previous trapping designs have not worked well because the species is sparsely distributed. We compared Key Largo woodrat abundance estimates in Key Largo, Florida, USA, obtained using trapping point transects (TPT) and spatially explicit capture–recapture (SECR) based on statistical properties, survey effort, practicality, and cost. Both methods combine aspects of distance sampling with capture–recapture, but TPT relies on radiotracking individuals to estimate detectability and SECR relies on repeat capture information to estimate densities of home ranges. Abundance estimates using TPT in the spring of 2007 and 2008 were 333 woodrats (CV = 0.46) and 696 (CV = 0.43), respectively. Abundance estimates using SECR in the spring, summer, and winter of 2007 were 97 (CV = 0.31), 334 (CV = 0.26), and 433 (CV = 0.20) animals, respectively. Trapping point transects used approximately 960 person-hours and 1,010 trap-nights/season. Spatially explicit capture–recapture used approximately 500 person-hours and 6,468 trap-nights/season. Significant time was saved in the SECR survey by setting large numbers of traps close together, minimizing time walking between traps. Trapping point transects were practical to implement in the field, and valuable auxiliary information on Key Largo woodrat behavior was obtained via radiocollaring. In this particular study, detectability of the woodrat using TPT was very low and consequently the SECR method was more efficient. Both methods require a substantial investment in survey effort to detect any change in abundance because of large uncertainty in estimates.
JMP was funded by Disney's Animal Programs, the US Fish and Wildlife Service and University of St Andrews.
Wed, 01 Jun 2016 00:00:00 GMT
http://hdl.handle.net/10023/10625
Potts, Joanne Marie; Buckland, Stephen Terrence; Thomas, Len; Savage, Anne

Assigning stranded bottlenose dolphins to source stocks using stable isotope ratios following the Deepwater Horizon oil spill
http://hdl.handle.net/10023/10588
The potential for stranded dolphins to serve as a tool for monitoring free-ranging populations would be enhanced if their stocks of origin were known. We used stable isotopes of carbon, nitrogen, and sulfur from skin to assign stranded bottlenose dolphins Tursiops truncatus to different habitats, as a proxy for stocks (demographically independent populations), following the Deepwater Horizon oil spill. Model results from biopsy samples collected from dolphins from known habitats (n = 205) resulted in an 80.5% probability of correct assignment. These results were applied to data from stranded dolphins (n = 217), resulting in predicted assignment probabilities of 0.473, 0.172, and 0.355 to Estuarine, Barrier Island (BI), and Coastal stocks, respectively. Differences were found west and east of the Mississippi River, with more Coastal dolphins stranding in western Louisiana and more Estuarine dolphins stranding in Mississippi. Within the Estuarine East Stock, 2 groups were identified, one predominantly associated with Mississippi and Alabama estuaries and another with western Florida. δ15N values were higher in stranded samples for both Estuarine and BI stocks, potentially indicating nutritional stress. High probabilities of correct assignment of the biopsy samples indicate predictable variation in stable isotopes and fidelity to habitat. The power of δ34S to discriminate habitats relative to salinity was essential. Stable isotopes may provide guidance regarding where additional testing is warranted to confirm demographic independence and aid in determining the source habitat of stranded dolphins, thus increasing the value of biological data collected from stranded individuals.
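The assignment step can be sketched with a simple class-conditional Gaussian classifier on the three isotope ratios. This is a simplified stand-in for the paper's model, and the reference values below are made up purely for illustration (chosen so that δ34S separates the low-salinity estuarine habitat from coastal waters, as the abstract describes).

```python
import numpy as np

def assign_probs(ref, labels, x):
    """Posterior probability that sample x = (d13C, d15N, d34S) belongs to
    each habitat class, assuming class-conditional multivariate normals
    with equal priors (an illustrative simplification)."""
    classes = sorted(set(labels))
    logl = []
    for c in classes:
        X = ref[np.array(labels) == c]
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        logl.append(-0.5 * (diff @ np.linalg.solve(cov, diff) + logdet))
    w = np.exp(np.array(logl) - max(logl))
    return dict(zip(classes, w / w.sum()))

# Hypothetical reference biopsy data: d34S differs strongly between habitats.
rng = np.random.default_rng(1)
est = rng.normal([-18.0, 14.0, 5.0], 0.5, size=(50, 3))
coastal = rng.normal([-17.0, 13.0, 17.0], 0.5, size=(50, 3))
ref = np.vstack([est, coastal])
labels = ["Estuarine"] * 50 + ["Coastal"] * 50

# A stranded sample with low d34S is assigned to the estuarine habitat.
probs = assign_probs(ref, labels, np.array([-18.1, 14.2, 5.3]))
print(probs)
```

The high posterior for the estuarine class here comes almost entirely from δ34S, mirroring the abstract's point that sulfur's discrimination of habitats along the salinity gradient was essential.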
Tue, 31 Jan 2017 00:00:00 GMT http://hdl.handle.net/10023/10588 2017-01-31T00:00:00Z Hohn, A. A.; Thomas, L.; Carmichael, R. H.; Litz, J.; Clemons-Chevis, C.; Shippee, S. F.; Sinclair, C.; Smith, S.; Speakman, T. R.; Tumlin, M. C.; Zolman, E. S.
A simulation approach to assessing environmental risk of sound exposure to marine mammals
http://hdl.handle.net/10023/10382
Intense underwater sounds caused by military sonar, seismic surveys, and pile driving can harm acoustically sensitive marine mammals. Many jurisdictions require such activities to undergo marine mammal impact assessments to guide mitigation. However, the ability to assess impacts in a rigorous, quantitative way is hindered by large knowledge gaps concerning hearing ability, sensitivity, and behavioral responses to noise exposure. We describe a simulation-based framework, called SAFESIMM (Statistical Algorithms For Estimating the Sonar Influence on Marine Megafauna), that can be used to calculate the numbers of agents (animals) likely to be affected by intense underwater sounds. We illustrate the simulation framework using two species that are likely to be affected by marine renewable energy developments in UK waters: gray seal (Halichoerus grypus) and harbor porpoise (Phocoena phocoena). We investigate three sources of uncertainty: how sound energy is perceived by agents with differing hearing abilities; how agents move in response to noise (i.e., the strength and directionality of their evasive movements); and the way in which these responses may interact with longer-term constraints on agent movement. The estimate of received sound exposure level (SEL) is influenced most strongly by the weighting function used to account for the species' presumed hearing ability. Strongly directional movement away from the sound source can cause modest reductions (~5 dB) in SEL over the short term (periods of less than 10 days). Beyond 10 days, the way in which agents respond to noise exposure has little or no effect on SEL, unless their movements are constrained by natural boundaries. Most experimental studies of noise impacts have been short-term. However, data are needed on long-term effects because uncertainty about predicted SELs accumulates over time. Synthesis and applications. 
Simulation frameworks offer a powerful way to explore, understand, and estimate effects of cumulative sound exposure on marine mammals and to quantify associated levels of uncertainty. However, they can often require subjective decisions that have important consequences for management recommendations, and the basis for these decisions must be clearly described.
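The accumulation of SEL that drives these results follows from standard acoustics: exposures add on the linear (energy) scale, not in decibels. A minimal sketch (standard formula, not the SAFESIMM implementation):

```python
import math

def cumulative_sel(sel_per_event):
    """Cumulative sound exposure level (dB) from a list of per-event SELs:
    convert each to energy, sum, and convert back to decibels."""
    energy = sum(10.0 ** (s / 10.0) for s in sel_per_event)
    return 10.0 * math.log10(energy)

# Ten identical exposures raise cumulative SEL by 10*log10(10) = 10 dB ...
print(cumulative_sel([160.0] * 10))  # 170.0
# ... and a ~5 dB per-event reduction from evasive movement (the paper's
# short-term result) lowers the cumulative level by the same ~5 dB.
print(cumulative_sel([155.0] * 10))  # 165.0
```

This also shows why long-duration exposures dominate: every additional event adds energy, so uncertainty in per-event levels compounds over time.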
Sat, 01 Apr 2017 00:00:00 GMT http://hdl.handle.net/10023/10382 2017-04-01T00:00:00Z Donovan, Carl R.; Harris, Catriona M.; Milazzo, Lorenzo; Harwood, John; Marshall, Laura; Williams, Rob
The challenges of analyzing behavioral response study data : an overview of the MOCHA (Multi-study OCean acoustics Human effects Analysis) project
http://hdl.handle.net/10023/9923
This paper describes the MOCHA project which aims to develop novel approaches for the analysis of data collected during Behavioral Response Studies (BRSs). BRSs are experiments aimed at directly quantifying the effects of controlled dosages of natural or anthropogenic stimuli (typically sound) on marine mammal behavior. These experiments typically result in low sample size, relative to variability, and so we are looking at a number of studies in combination to maximize the gain from each one. We describe a suite of analytical tools applied to BRS data on beaked whales, including a simulation study aimed at informing future experimental design.
Fri, 01 Jan 2016 00:00:00 GMT http://hdl.handle.net/10023/9923 2016-01-01T00:00:00Z Harris, Catriona M; Thomas, Len; Sadykova, Dinara; De Ruiter, Stacy Lynn; Tyack, Peter Lloyd; Southall, Brandon L.; Read, Andrew J.; Miller, Patrick
Habitat complexity in aquatic microcosms affects processes driven by detritivores
http://hdl.handle.net/10023/9749
Habitat complexity can influence predation rates (e.g. by providing refuge) but other ecosystem processes and species interactions might also be modulated by the properties of habitat structure. Here, we focussed on how complexity of artificial habitat (plastic plants), in microcosms, influenced short-term processes driven by three aquatic detritivores. The effects of habitat complexity on leaf decomposition, production of fine organic matter and pH levels were explored by measuring complexity in three ways: 1. as the presence vs. absence of habitat structure; 2. as the amount of structure (3 or 4.5 g of plastic plants); and 3. as the spatial configuration of structures (measured as fractal dimension). The experiment also addressed potential interactions among the consumers by running all possible species combinations. In the experimental microcosms, habitat complexity influenced how species performed, especially when comparing structure present vs. structure absent. Treatments with structure showed higher fine particulate matter production and lower pH compared to treatments without structures, and this was probably due to higher digestion and respiration when structures were present. When we explored the effects of the different complexity levels, we found that the amount of structure added explained more than the fractal dimension of the structures. We give a detailed overview of the experimental design, statistical models and R code, because our statistical analysis can be applied to other study systems (and disciplines such as restoration ecology). We further suggest how to optimise statistical power when artificially assembling, and analysing, ‘habitat complexity’ by not confounding complexity with the amount of structure added. 
LF was supported in part by the Spanish Ministry of Economy and Competitiveness through the project SCARCE Consolider-Ingenio CSD2009-00065.
Tue, 01 Nov 2016 00:00:00 GMT http://hdl.handle.net/10023/9749 2016-11-01T00:00:00Z Flores, Lorea; Bailey, R. A.; Elosegi, Arturo; Larrañaga, Aitor; Reiss, Julia
Bayesian multi-species modelling of non-negative continuous ecological data with a discrete mass at zero
http://hdl.handle.net/10023/9626
Severe declines in the number of some songbirds over the last 40 years have caused heated debate amongst interested parties. Many factors have been suggested as possible causes for these declines, including an increase in the abundance and distribution of an avian predator, the Eurasian sparrowhawk Accipiter nisus. To test for evidence for a predator effect on the abundance of its prey, we analyse data on 10 species visiting garden bird feeding stations monitored by the British Trust for Ornithology in relation to the abundance of sparrowhawks. We apply Bayesian hierarchical models to data relating to averaged maximum weekly counts from a garden bird monitoring survey. These data are essentially continuous, bounded below by zero, but for many species show a marked spike at zero that many standard distributions would not be able to account for. We use the Tweedie distributions, which for certain areas of parameter space relate to continuous non-negative distributions with a discrete probability mass at zero, and are hence able to deal with the shape of the empirical distributions of the data.
The methods developed in this thesis begin by modelling single prey species independently with an avian predator as a covariate, using MCMC methods to explore parameter and model spaces. This model is then extended to a multiple-prey-species model, testing for interactions between species as well as synchrony in their response to environmental factors and unobserved variation.
Finally we use a relatively new methodological framework, namely the SPDE approach in the INLA framework, to fit a multi-species spatio-temporal model to the ecological data.
The results from the analyses are consistent with the hypothesis that sparrowhawks are suppressing the numbers of some species of birds visiting garden feeding stations. Only the species most susceptible to sparrowhawk predation seem to be affected.
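The "spike at zero" property is easy to demonstrate: for power parameter 1 < p < 2 a Tweedie variable is a compound Poisson-gamma sum, so P(X = 0) = exp(-lambda) exactly. A minimal sketch with illustrative parameters (not the thesis's fitted values):

```python
import math
import numpy as np

def rtweedie_cpg(n, lam, alpha, beta, seed=0):
    """Compound Poisson-gamma draws: the Tweedie family with 1 < p < 2.
    Each draw is a sum of Poisson(lam)-many Gamma(alpha, beta) variables,
    so the result is continuous on (0, inf) with a discrete mass of
    exp(-lam) at exactly zero."""
    rng = np.random.default_rng(seed)
    k = rng.poisson(lam, size=n)  # number of gamma summands per draw
    # Sum of k iid Gamma(alpha, beta) variables is Gamma(k*alpha, beta).
    return np.array([rng.gamma(alpha * ki, beta) if ki > 0 else 0.0
                     for ki in k])

x = rtweedie_cpg(100_000, lam=1.0, alpha=2.0, beta=0.5)
zero_frac = float((x == 0).mean())
print(round(zero_frac, 3))  # close to exp(-1) ~ 0.368
```

This is exactly the shape described above: a continuous right tail for positive counts plus a point mass at zero for weeks when no birds of a species visit a feeder.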
Thu, 01 Jan 2015 00:00:00 GMT http://hdl.handle.net/10023/9626 2015-01-01T00:00:00Z Swallow, Ben
Gauging allowable harm limits to cumulative, sub-lethal effects of human activities on wildlife : a case-study approach using two whale populations
http://hdl.handle.net/10023/8716
As sublethal human pressures on marine wildlife and their habitats increase and interact in complex ways, there is a pressing need for methods to quantify cumulative impacts of these stressors on populations, and policy decisions about allowable harm limits. Few studies quantify population consequences of individual stressors, and fewer quantify synergistic effects. Incorporating all sources of uncertainty can cause predictions to span the range from negligible to catastrophic. Two places were identified to bound this problem through energetic mechanisms that reduce prey available to individuals. First, the US Marine Mammal Protection Act's Potential Biological Removal (PBR) equation was used as a placeholder allowable harm limit to represent the number of animals that can be removed annually without depleting a population below agreed-upon management targets. That rephrased the research question from, “How big could cumulative impacts be?” to “How big would cumulative impacts have to be to exceed an agreed-upon threshold?” Secondly, two data-rich case studies, namely Gulf of Maine humpback and northeast Pacific resident killer whales, were used as examples to parameterize the weakest link, namely between prey availability and demography. Given no additional information, the model predicted that human activities need only reduce prey available to the killer whale population by ~10% to cause a population-level take, through reduced fecundity and/or survival, equivalent to PBR. By contrast, in the humpback population, reduction in prey availability of ~50% was needed to cause a similar, PBR-sized effect. The paper describes an approach – results are merely illustrative. The two case studies differ in prey specialization, life history, and, no doubt, proximity to carrying capacity. 
This method of inverting the problem refocuses discussions around what the level of prey depletion – via competition with commercial fisheries, displacement from feeding areas through noise-generating activities, or acoustic masking of signals used to detect prey – would have to occur to exceed allowable harm limits set for lethal takes in fisheries or other, more easily quantifiable, human activities.
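The PBR threshold used as the placeholder harm limit has a standard form under the US Marine Mammal Protection Act: PBR = N_min x (R_max / 2) x F_r. A minimal sketch with illustrative numbers (not the stock assessments used in the paper):

```python
def pbr(n_min, r_max=0.04, f_r=0.5):
    """US MMPA Potential Biological Removal: animals that may be removed
    annually while still meeting management targets.
    n_min : minimum (lower-percentile) abundance estimate
    r_max : maximum net productivity rate (0.04 is the cetacean default)
    f_r   : recovery factor in (0, 1]"""
    return n_min * 0.5 * r_max * f_r

# Illustrative only: a stock of 800 animals with default parameters.
print(pbr(800))  # 8.0 animals/year
```

Inverting the problem, as the paper does, then asks how large a reduction in prey availability would depress fecundity and/or survival enough to produce a PBR-sized loss.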
Rob Williams was supported by a Marie Curie International Incoming Fellowship within the 7th European Community Framework Programme (Project CONCEAL, FP7, PIIF-GA-2009-253407).
Mon, 01 Aug 2016 00:00:00 GMT http://hdl.handle.net/10023/8716 2016-08-01T00:00:00Z Williams, Rob; Thomas, Len; Ashe, Erin; Clark, Christopher W.; Hammond, Philip S.
Bayesian sequential tests of the initial size of a linear pure death process
http://hdl.handle.net/10023/8286
We provide a recursive algorithm for determining the sampling plans of invariant Bayesian sequential tests of the initial size of a linear pure death process of unknown rate. These tests compare favourably with the corresponding truncated sequential probability ratio tests.
Fri, 01 May 2015 00:00:00 GMT http://hdl.handle.net/10023/8286 2015-05-01T00:00:00Z Goudie, I.B.J.
Using species proportions to quantify turnover in biodiversity
http://hdl.handle.net/10023/8033
Quantifying species turnover is an important aspect of biodiversity monitoring. Turnover measures are usually based on species presence/absence data, reflecting the rate at which species are replaced. However, measures that reflect the rate at which individuals of a species are replaced by individuals of another species are far more sensitive to change. In this paper, we propose families of turnover measures that reflect changes in species proportions. We study the properties of our measures, and use simulation to assess their success in detecting turnover. Using data on the British farmland bird community from the breeding bird survey, we evaluate our measures to quantify temporal turnover and how it varies across the British mainland.
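The contrast between presence/absence and proportion-based measures can be illustrated with one simple member of the latter family: half the L1 distance between species-proportion vectors in two years. This is my own illustrative choice, not necessarily one of the authors' proposed measures.

```python
def proportion_turnover(p1, p2):
    """Half the L1 distance between two species-proportion vectors:
    0 when proportions are unchanged, 1 when no individuals are shared.
    An illustrative proportion-based turnover measure."""
    assert abs(sum(p1) - 1) < 1e-9 and abs(sum(p2) - 1) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p1, p2))

# The same three species are present in both years, so any
# presence/absence measure reports zero turnover -- yet individuals
# have clearly been replaced across species:
t = proportion_turnover([0.70, 0.20, 0.10], [0.40, 0.40, 0.20])
print(round(t, 3))  # 0.3
```

This is the sense in which proportion-based measures are "far more sensitive to change": they register shifts in community composition long before any species is lost or gained.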
We are very grateful to all the volunteers who have contributed to the BBS. Yuan was funded by EPSRC/NERC grant EP/1000917/1. Harrison was funded by the Scottish Government’s Centre of Expertise ClimateXChange (www.climatexchange.org.uk).
Wed, 01 Jun 2016 00:00:00 GMT http://hdl.handle.net/10023/8033 2016-06-01T00:00:00Z Yuan, Yuan; Buckland, Stephen Terrence; Harrison, Phil; Foss, Sergey; Johnston, Alison
Spatial variation in maximum dive depth in gray seals in relation to foraging.
http://hdl.handle.net/10023/6886
Habitat preference maps are a way of representing animals’ space use in two dimensions. For marine animals, the third dimension is an important aspect of spatial ecology. We used dive data from seven gray seals Halichoerus grypus (a primarily benthic forager) collected with GPS phone tags (Sea Mammal Research Unit) to investigate the distribution of the maximum depth visited in each dive. We modeled maximum dive depth as a function of spatiotemporal covariates using a generalized additive mixed model (GAMM) with individual as a random effect. Bathymetry, horizontal displacement, latitude and longitude, Julian day, sediment type, and light conditions accounted for 37% of the variability in the data. Persistent patterns of autocorrelation in the raw data suggest that individual intrinsic rhythm might be an important factor, not captured by external covariates. The strength of using this statistical method to generate spatial predictions of the distribution of maximum dive depth is its applicability to other plunge and pursuit divers. Despite being predictions of a point estimate, these maps provide some insight into the third dimension of habitat use in marine animals. The capacity to predict this aspect of vertical habitat use may help avoid conflict between animal habitat and coastal or offshore developments.
Theoni Photopoulou was funded by SMRU Ltd in the form of a Ph.D. studentship, 2008–2012.
Tue, 01 Jul 2014 00:00:00 GMT http://hdl.handle.net/10023/6886 2014-07-01T00:00:00Z Photopoulou, Theoni; Fedak, Mike; Thomas, Len; Matthiopoulos, Jason
Nested row-column designs for near-factorial experiments with two treatment factors and one control treatment
http://hdl.handle.net/10023/6556
This paper presents some methods of designing experiments in a block design with nested rows and columns. The treatments consist of all combinations of levels of two treatment factors, with an additional control treatment.
The authors also thank Queen Mary, University of London, the University of St Andrews and the Poznan University of Life Sciences for financial support. The second author was also supported by the British-Polish Young Scientists Programme, grant WAR/342/116.
Thu, 01 Oct 2015 00:00:00 GMT http://hdl.handle.net/10023/6556 2015-10-01T00:00:00Z Bailey, Rosemary Anne; Lacka, Agnieszka
Random coefficient models for complex longitudinal data
http://hdl.handle.net/10023/6386
Longitudinal data are common in biological research. However, real data sets vary considerably in terms of their structure and complexity and present many challenges for statistical modelling. This thesis proposes a series of methods using random coefficients for modelling two broad types of longitudinal response: normally distributed measurements and binary recapture data.
Biased inference can occur in linear mixed-effects modelling if subjects are drawn from a number of unknown sub-populations, or if the residual covariance is poorly specified. To address some of the shortcomings of previous approaches in terms of model selection and flexibility, this thesis presents methods for: (i) determining the presence of latent grouping structures using a two-step approach, involving regression splines for modelling functional random effects and mixture modelling of the fitted random effects; and (ii) flexible modelling of the residual covariance matrix using regression splines to specify smooth and potentially non-monotonic variance and correlation functions.
Spatially explicit capture-recapture methods for estimating the density of animal populations have shown a rapid increase in popularity over recent years. However, further refinements to existing theory and fitting software are required to apply these methods in many situations. This thesis presents: (i) an analysis of recapture data from an acoustic survey of gibbons using supplementary data in the form of estimated angles to detections, (ii) the development of a multi-occasion likelihood including a model for stochastic availability using a partially observed random effect (interpreted in terms of calling behaviour in the case of gibbons), and (iii) an analysis of recapture data from a population of radio-tagged skates using a conditional likelihood that allows the density of animal activity centres to be modelled as functions of time, space and animal-level covariates.
Fri, 27 Jun 2014 00:00:00 GMT | Kidney, Darren

Optimal cross-over designs for full interaction models
http://hdl.handle.net/10023/5768
We consider repeated measurement designs when a residual or carry-over effect may be present in at most one later period. Since assuming an additive model may be unrealistic for some applications and leads to biased estimation of treatment effects, we consider a model with interactions between carry-over and direct treatment effects. When the aim of the experiment is to study the effects of a treatment used alone, we obtain universally optimal approximate designs. We also propose some efficient designs with a reduced number of subjects.
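The carry-over balance at issue here can be illustrated with a classical construction (a general textbook device, not the designs proposed in the paper): a Williams Latin square for an even number of treatments, in which every treatment is preceded exactly once by every other treatment.

```python
def williams_design(t):
    """Williams Latin square for an even number t of treatments.

    Each row is one subject's sequence of treatments over t periods;
    across subjects, every treatment follows every other treatment
    exactly once, balancing first-order carry-over effects.
    """
    if t % 2:
        raise ValueError("this construction requires an even t")
    # First row interleaves from both ends: 0, 1, t-1, 2, t-2, ...
    first, lo, hi = [0], 1, t - 1
    while len(first) < t:
        first.append(lo)
        lo += 1
        if len(first) < t:
            first.append(hi)
            hi -= 1
    # Remaining rows are cyclic shifts of the first (add i mod t).
    return [[(x + i) % t for x in first] for i in range(t)]

design = williams_design(4)
for row in design:
    print(row)
# Every ordered pair of distinct treatments occurs exactly once in
# consecutive periods:
pairs = {(r[j], r[j + 1]) for r in design for j in range(3)}
print(len(pairs))  # -> 12
```

With interactions between carry-over and direct effects, as in the model above, such combinatorial balance alone no longer guarantees optimality, which is what motivates the approximate designs the paper derives.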
July 2014
Sat, 01 Nov 2014 00:00:00 GMT | Bailey, Rosemary Anne; Druilhet, Pierre

The effects of acoustic misclassification on cetacean species abundance estimation
http://hdl.handle.net/10023/5163
To estimate the density or abundance of a cetacean species using acoustic detection data, it is necessary to correctly identify the species that are detected. Developing an automated species classifier with 100% correct classification rate for any species is likely to stay out of reach. It is therefore necessary to consider the effect of misidentified detections on the number of observed data and consequently on abundance or density estimation, and develop methods to cope with these misidentifications. If misclassification rates are known, it is possible to estimate the true numbers of detected calls without bias. However, misclassification and uncertainties in the level of misclassification increase the variance of the estimates. If the true numbers of calls from different species are similar, then a small amount of misclassification between species and a small amount of uncertainty around the classification probabilities does not have an overly detrimental effect on the overall variance. However, if there is a difference in the encounter rate between species calls and/or a large amount of uncertainty in misclassification rates, then the variance of the estimates becomes very large and this dramatically increases the variance of the final abundance estimate.
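The unbiased-correction idea can be sketched for the two-species case: if the classification probabilities are known, the expected observed counts are a linear mix of the true counts, and the system can be inverted. The probabilities and counts below are hypothetical, chosen only to illustrate the mechanics.

```python
def correct_counts(conf, observed):
    """Recover true call counts for two species from observed counts.

    conf[i][j] = P(a call from species i is classified as species j),
    so expected observed counts satisfy observed = conf^T @ true.
    Solves the 2x2 system by Cramer's rule.
    """
    (a, b), (c, d) = conf  # rows: true species; columns: classified as
    o1, o2 = observed
    det = a * d - b * c
    t1 = (o1 * d - o2 * c) / det
    t2 = (a * o2 - b * o1) / det
    return t1, t2

# 90% correct classification for both species; species 2 calls ten
# times as common as species 1 (true counts 100 and 1000):
conf = [[0.9, 0.1], [0.1, 0.9]]
observed = [0.9 * 100 + 0.1 * 1000, 0.1 * 100 + 0.9 * 1000]  # [190, 910]
print([round(t, 6) for t in correct_counts(conf, observed)])  # -> [100.0, 1000.0]
```

The point estimate is unbiased, but note how the correction amplifies sampling noise in the observed counts when encounter rates differ strongly between species, which is the variance inflation discussed above.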
This work was funded through the Natural Environment Research Council and SMRU Ltd.
Wed, 25 Dec 2013 00:00:00 GMT | Caillat, Marjolaine Annie; Thomas, Len; Gillespie, Douglas Michael

Novel methods for species distribution mapping including spatial models in complex regions
http://hdl.handle.net/10023/4514
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.
The first new method, Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel (`as the animal swims') between two points may be greater than the straight line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently-developed smoothing techniques: Thin Plate Splines (TPS, Harder and Desmarais [1972]), Geodesic Low rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets.
Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA, Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface that is being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting.
There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.
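One simple way of combining two estimates of the same quantity while "taking account of the levels of precision" is inverse-variance weighting of the cell-wise estimates. This is only a sketch of that general principle, not the thesis's exact Bayesian combination, and the numbers are hypothetical.

```python
def combine(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted combination of two density estimates
    for the same grid cell; the more precise map gets more weight,
    and the combined variance is smaller than either input variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return est, var

# A cell estimated at 2.0 animals/km^2 (variance 0.25) by one survey
# and 3.0 animals/km^2 (variance 1.0) by a less precise one:
print(combine(2.0, 0.25, 3.0, 1.0))  # -> (2.2, 0.2)
```

Applied cell by cell where two maps overlap, this produces visible seams at map boundaries, which is why a smoothing step at the joins (kernel smoothing in the thesis) is needed.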
Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data. In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than those from a traditional GAM-based method. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion in the Scottish network of Nature Conservation Marine Protected Areas.
Tue, 05 Nov 2013 00:00:00 GMT | Scott-Hayward, Lindesay Alexandra Sarah

Modelling catch sampling uncertainty in fisheries stock assessment : the Atlantic-Iberian sardine case
http://hdl.handle.net/10023/4474
The statistical assessment of harvested fish populations, such as the Atlantic-Iberian sardine (AIS) stock, needs to deal with uncertainties inherent in fisheries systems. Uncertainties arising from sampling errors and stochasticity in stock dynamics must be incorporated into stock assessment models so that management decisions are based on a realistic evaluation of the uncertainty about the status of the stock. The main goal of this study is to develop a stock assessment framework that accounts for some of the uncertainties associated with the AIS stock that are currently not integrated into stock assessment models. In particular, it focuses on accounting for the uncertainty arising from the catch data sampling process.
The central innovation of the thesis is the development of a Bayesian integrated stock assessment (ISA) model, in which an observation model explicitly links stock dynamics parameters with statistical models for the various types of data observed from catches of the AIS stock. This allows for systematic and statistically consistent propagation of the uncertainty inherent in the catch sampling process across the whole stock assessment model, through to estimates of biomass and stock parameters. The method is tested by simulations and found to provide reliable and accurate estimates of stock parameters and associated uncertainty, while also outperforming existing design-based and model-based estimation approaches.
The method is computationally very demanding, and this is an obstacle to its adoption by fisheries bodies. Once this obstacle is overcome, the ISA modelling framework developed and presented in this thesis could provide an important contribution to improving the evaluation of uncertainty in fisheries stock assessments, not only for the AIS stock but for any other fish stock with similar data and dynamics structure. Furthermore, the models developed in this study establish a solid conceptual platform for future development of more complex models of fish population dynamics.
Tue, 01 Jan 2013 00:00:00 GMT | Caneco, Bruno

Estimating wildlife distribution and abundance from line transect surveys conducted from platforms of opportunity
http://hdl.handle.net/10023/3727
Line transect data obtained from 'platforms of opportunity' are useful for the monitoring of long-term trends in dolphin populations which occur over vast areas, yet analyses of such data are problematic due to violation of fundamental assumptions of line transect methodology. In this thesis we develop methods which allow estimates of dolphin relative abundance to be obtained when certain assumptions of line transect sampling are violated.
Generalised additive models are used to model encounter rate and mean school size as functions of spatially and temporally referenced covariates. The estimated relationship between the response and the environmental and locational covariates is then used to obtain a predicted surface for the response over the entire survey region. Given those predicted surfaces, a density surface can then be obtained and an estimate of abundance computed by numerically integrating over the entire survey region. This approach is particularly useful when search effort is not random, in which case standard line transect methods would yield biased estimates.
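The final step described above — numerically integrating a predicted density surface to obtain abundance — amounts to summing density times cell area over a prediction grid. The density function below is a hypothetical stand-in for a fitted surface, not the thesis's model.

```python
import math

def predicted_density(x, y):
    """Hypothetical fitted density surface (animals per km^2):
    a Gaussian bump of peak 5 animals/km^2 centred at (50, 50)."""
    return 5.0 * math.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 800.0)

def abundance(x0, x1, y0, y1, cell=1.0):
    """Numerically integrate the density surface over a rectangular
    survey region: sum density * area over grid-cell midpoints."""
    total = 0.0
    x = x0 + cell / 2
    while x < x1:
        y = y0 + cell / 2
        while y < y1:
            total += predicted_density(x, y) * cell * cell
            y += cell
        x += cell
    return total

# Over the whole plane the bump integrates to 5 * pi * 800 ~ 12,566;
# a 100 km x 100 km region centred on the peak captures most of it:
print(round(abundance(0, 100, 0, 100)))
```

In practice the grid, the fitted surface, and the cell areas come from the survey region and the GAM predictions; only the summation itself is shown here.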
Estimates of f(0) (the inverse of the effective strip (half-)width), an essential component of the line transect estimator, may also be biased due to heterogeneity in detection probabilities. We developed a conditional likelihood approach in which covariate effects are directly incorporated into the estimation procedure. Simulation results indicated that the method performs well in the presence of size-bias. When multiple covariates are used, it is important that covariate selection be carried out.
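To make the role of f(0) concrete: for the standard half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)), the effective strip half-width is mu = sigma * sqrt(pi/2), f(0) = 1/mu, and density is estimated as D = n * f(0) / (2L). The numbers below are hypothetical, and this ignores the covariate effects the thesis incorporates.

```python
import math

def halfnormal_f0(sigma):
    """f(0) for a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)): the effective strip half-width
    is mu = sigma * sqrt(pi / 2), and f(0) = 1 / mu."""
    return 1.0 / (sigma * math.sqrt(math.pi / 2))

def density_estimate(n, L, sigma):
    """Conventional line transect density estimator:
    D = n * f(0) / (2 * L), with n detections on total line length L."""
    return n * halfnormal_f0(sigma) / (2 * L)

# 96 dolphin schools detected along 400 km of transect, sigma = 1.2 km:
print(round(density_estimate(96, 400.0, 1.2), 3))  # -> 0.08 schools per km^2
```

Heterogeneity in detection (e.g. larger schools seen further away) biases sigma, and hence f(0), which is why the conditional likelihood approach above brings covariates into its estimation.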
As an example we applied the methods described above to eastern tropical Pacific dolphin stocks. However, uncertainty in stock identification has never been directly incorporated into methods used to obtain estimates of relative or absolute abundance. Therefore we illustrate an approach in which trends in dolphin relative abundance are monitored by small areas, rather than stocks.
Mon, 01 Jan 2001 00:00:00 GMT | Marques, Fernanda F. C.

Bayesian point process modelling of ecological communities
http://hdl.handle.net/10023/3710
The modelling of biological communities is important to further the understanding of species coexistence and the mechanisms involved in maintaining biodiversity. This involves considering not only interactions between individual biological organisms, but also the incorporation of covariate information, if available, in the modelling process. This thesis explores the use of point processes to model interactions in bivariate point patterns within a Bayesian framework, and, where applicable, in conjunction with covariate data. Specifically, we distinguish between symmetric and asymmetric species interactions and model these using appropriate point processes. In this thesis we consider both pairwise interaction point processes, to allow for inhibitory interactions, and area interaction point processes, to allow for both inhibitory and attractive interactions.
It is envisaged that the analyses and innovations presented in this thesis will contribute to the parsimonious modelling of biological communities.
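A minimal illustration of a pairwise-interaction process is the Strauss model, whose unnormalised density is beta^n * gamma^s(x), where s(x) counts pairs of points closer than an interaction radius r; gamma < 1 gives inhibition and gamma = 1 recovers a Poisson process. The point patterns and parameter values below are hypothetical.

```python
import math

def strauss_log_density(points, beta, gamma, r):
    """Unnormalised log density of a Strauss pairwise-interaction
    point process: n * log(beta) + s * log(gamma), where s is the
    number of point pairs closer than r. gamma < 1 penalises
    close pairs, i.e. models inhibition."""
    n = len(points)
    s = sum(1
            for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r)
    return n * math.log(beta) + s * math.log(gamma)

clustered = [(0.10, 0.10), (0.12, 0.11), (0.11, 0.13)]  # 3 close pairs
spread = [(0.10, 0.10), (0.50, 0.50), (0.90, 0.10)]     # no close pairs
# Under inhibition (gamma = 0.2), the spread pattern is more probable:
print(strauss_log_density(clustered, beta=2.0, gamma=0.2, r=0.05))
print(strauss_log_density(spread, beta=2.0, gamma=0.2, r=0.05))
```

Bivariate versions, with separate interaction terms within and between the two species, are what allow the symmetric and asymmetric interactions discussed above to be distinguished.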
Fri, 28 Jun 2013 00:00:00 GMT | Nightingale, Glenna Faith

Animal population estimation using mark-recapture and plant-capture
http://hdl.handle.net/10023/3655
Mark-recapture is a method of population estimation that involves capturing a number of animals from a population of unknown size on several occasions, and marking those animals that are caught each time. By observing the number of marked animals that are subsequently seen, estimates of the total population size can be made. There are various subclasses of the mark-recapture method, called the Otis class of models (Otis, Burnham, White & Anderson 1978), which relate to the assumed behaviour of the individuals in the target population.
More recent work has generalised the theory of mark-recapture to so-called plant-capture, where a known number of animals are pre-inserted into the target population. Sampling is then carried out as normal, but with additional information coming from knowledge of the number of planted individuals.
The theory underpinning plant-capture is less well developed than that of mark-recapture, and the benefit of the former over the latter for population estimation has not often been tested. This thesis shows that, under fixed and random sample-size models, the inclusion of plants can improve mean point estimation of population size for various estimators. The estimator of Pathak (1964) is generalised to allow for the inclusion of plants in the target population. The results show that mean estimates from most estimators, under most models, can be improved with the inclusion of plants, and the sample standard deviations of the simulations can be reduced. This improvement in mean point estimation is particularly pronounced when the number of animals captured is low.
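The two ideas can be sketched in their simplest forms (these are textbook estimators, not Pathak's estimator or the generalisation developed in the thesis): a two-sample mark-recapture estimate, and a plant-capture estimate in which the recovery rate of the known number of plants estimates capture probability. All counts are hypothetical.

```python
def lincoln_petersen(n1, n2, m2):
    """Two-sample mark-recapture estimate of population size:
    n1 animals marked, n2 caught on the second occasion, m2 of
    which were marked. Chapman's bias-adjusted form."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def plant_capture(planted, plants_caught, wild_caught):
    """Plant-capture sketch: the fraction of planted animals that
    is recovered estimates capture probability, which then scales
    up the count of unplanted animals caught."""
    p_hat = plants_caught / planted
    return wild_caught / p_hat

print(lincoln_petersen(n1=50, n2=60, m2=15))  # -> 193.4375
# 40 animals planted, 10 recovered (p_hat = 0.25), 50 wild caught:
print(plant_capture(planted=40, plants_caught=10, wild_caught=50))  # -> 200.0
```

The advantage of plants is visible in the second estimator: capture probability is estimated from a known denominator rather than from the overlap of two small samples, which is why the improvement is most pronounced when few animals are captured.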
Sample coverage, which is the proportion of distinct animals caught during sampling, is also often sought by practitioners. Given here are a generalisation of the inverse population estimator of Pathak (1964) to plant-capture and a proposed new inverse population estimator, both of which can be used to estimate the coverage of a sample.
Sun, 01 Jan 2012 00:00:00 GMT | Gormley, Richard

Estimating anglerfish abundance from trawl surveys, and related problems
http://hdl.handle.net/10023/3652
The content of this thesis was motivated by the need to estimate anglerfish abundance from stratified random trawl surveys of the anglerfish stock which occupies the northern European shelf (Fernandes et al., 2007). The survey was conducted annually from 2005 to 2010 in order to obtain age-structured estimates of absolute abundance for this stock. An estimation method is developed that incorporates statistical models for herding, length-based net retention probability and missing age data, and that propagates uncertainty from all of these sources into variance estimation.
A key component of abundance estimation is the estimation of capture probability. Capture probability is estimated from the experimental survey data using various logistic regression models with haul as a random effect. Conditional on the estimated capture probability, a number of abundance estimators are developed and applied to the anglerfish data. The abundance estimators differ in the way that the haul effect is incorporated, and their performance is investigated by simulation. An estimator with a form similar to that conventionally used to estimate abundance from distance sampling surveys is found to perform best.
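The estimator conventionally used in distance sampling is a Horvitz-Thompson-like estimator, which scales each detected animal by the inverse of its estimated capture probability. The probabilities below are hypothetical illustrative values, not output from the thesis's fitted random-effects models.

```python
def horvitz_thompson(catch_probs):
    """Horvitz-Thompson-style abundance estimate: each captured
    animal contributes 1 / p_i, the inverse of its estimated
    capture probability, so poorly-caught animals count for more."""
    return sum(1.0 / p for p in catch_probs)

# Estimated capture probabilities for five caught fish, e.g. from a
# length-based logistic regression (smaller fish escape the net more):
p_hat = [0.2, 0.4, 0.5, 0.8, 0.8]
print(horvitz_thompson(p_hat))  # -> 12.0
```

The complication addressed in the thesis is that when capture probability contains a haul-level random effect, p_i is not known for each animal and must itself be integrated over or estimated, which is what distinguishes the competing estimators.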
The estimators developed for the anglerfish survey data which incorporate random effects in capture probability have wider application than trawl surveys. We examine the analytic properties of these estimators when the capture/detection probability is known. We apply these estimators to three different types of survey data in addition to the anglerfish data, with different forms of random effects, and investigate their performance by simulation. We find that a generalization of the form of estimator typically used on line transect surveys performs best overall, with the lowest bias and mean squared error among all the estimators we considered.
Sun, 01 Jan 2012 00:00:00 GMT | Yuan, Yuan

Mixed effect models in distance sampling
http://hdl.handle.net/10023/3618
Recently, much effort has been expended on improving conventional distance sampling methods, e.g. by replacing the design-based approach with a model-based approach in which observed counts are related to environmental covariates (Hedley and Buckland, 2004), or by incorporating covariates in the detection function model (Marques and Buckland, 2003).
While these models have generally been limited to fixed effects, we propose
four different methods for analysing distance sampling data using mixed effects models. These include an extension of the two-stage approach (Buckland et al., 2009),
where we include site random effects in the second-stage count model to account for
correlated counts at the same sites. We also present two integrated approaches which
include site random effects in the count model. These approaches combine the analysis stages for the detection and count models and allow simultaneous estimation of all
parameters. Furthermore, we develop a detection function model that incorporates
random effects. We also propose a novel Bayesian approach to analysing distance sampling data which uses a Metropolis-Hastings algorithm for updating model parameters and a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm for assessing model uncertainty. Lastly, we propose using hierarchical centering as a novel technique for improving model mixing and hence facilitating an RJMCMC algorithm for mixed models.
We analyse two case studies, both large-scale point transect surveys, where the interest lies in establishing the effects of conservation buffers on agricultural fields. For each case study, we compare the results from one integrated approach to those from
the extended two-stage approach. We find that the two approaches may differ in parameter estimates for covariates that appear in both the detection and count models, and in model probabilities when model uncertainty is included in inference. The performance of the random-effects detection function is assessed via simulation; when heterogeneity is present in the data, one of the new estimators yields improved results compared to conventional distance sampling estimators.
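A minimal sketch of what a conventional detection function analysis looks like, assuming the standard half-normal form g(x) = exp(-x^2 / (2 sigma^2)) for line transect distances. This is a deliberate simplification: the thesis's models are point transect models with covariates and random effects, whereas the half-normal MLE below is the textbook baseline case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate line transect perpendicular distances: animals uniform in
# distance on [0, w], detected with probability exp(-x^2 / (2 sigma^2)).
sigma_true, w, n_animals = 40.0, 200.0, 20000
x = rng.uniform(0.0, w, n_animals)
detected = rng.random(n_animals) < np.exp(-x**2 / (2.0 * sigma_true**2))
dists = x[detected]

# For the untruncated half-normal, the MLE of sigma^2 is the mean squared
# observed distance; the effective strip half-width is sigma * sqrt(pi/2).
sigma_hat = np.sqrt(np.mean(dists**2))
esw = sigma_hat * np.sqrt(np.pi / 2.0)
```

Mixed-effect extensions replace the single sigma with, e.g., site-level random effects, which is what makes the integrated and two-stage analyses above non-trivial.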
Oedekoven, Cornelia Sabrina. 01 Jan 2013. http://hdl.handle.net/10023/3618

Quantifying biodiversity trends in time and space
http://hdl.handle.net/10023/3414
The global loss of biodiversity calls for robust large-scale diversity assessment. Biological diversity is a multi-faceted concept; defined as the “variety of life”, answering questions such as “How much is there?” or more precisely “Have we succeeded in reducing the rate of its decline?” is not straightforward. While various aspects of biodiversity give rise to numerous ways of quantification, we focus on temporal (and spatial) trends and their changes in species diversity.
Traditional diversity indices summarise information contained in the species abundance distribution, i.e. each species' proportional contribution to total abundance. Estimated from data, these indices can be biased if variation in detection probability is ignored. We discuss differences between diversity indices and demonstrate possible adjustments for detectability.
Additionally, most indices focus on the most abundant species in ecological communities. We introduce a new set of diversity measures, based on a family of goodness-of-fit statistics. A function of a free parameter, this family allows us to vary the sensitivity of these measures to dominance and rarity of species.
Their performance is studied by assessing temporal trends in diversity for five communities of British breeding birds based on 14 years of survey data, where they are applied alongside the current headline index, a geometric mean of relative abundances. Revealing the contributions of both rare and common species to biodiversity trends, these "goodness-of-fit" measures provide novel insights into how ecological communities change over time.
Biodiversity is not only subject to temporal change; it also varies across space. We take first steps towards estimating spatial diversity trends. Finally, the processes maintaining biodiversity act locally, at specific spatial scales. In contrast to abundance-based summary statistics, the spatial characteristics of ecological communities may distinguish between these processes. We suggest a generalisation of a spatial summary, the cross-pair overlap distribution, to render it more flexible with respect to spatial scale.
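The headline index mentioned above, a geometric mean of relative abundances, can be computed in a few lines. The counts below are toy values, not the British breeding-bird data analysed in the thesis.

```python
import numpy as np

# counts[i, t]: abundance index of species i in year t (toy data).
counts = np.array([
    [100.0, 110.0, 121.0],  # common, increasing
    [ 50.0,  45.0,  40.0],  # declining
    [ 10.0,  10.0,  12.0],  # rare, roughly stable
])

# Geometric mean of abundances relative to the first year: every species
# contributes equally to the index, however abundant it is.
rel = counts / counts[:, [0]]
index = np.exp(np.mean(np.log(rel), axis=0))
```

Because each species enters on the log scale, a halving and a doubling cancel exactly; the goodness-of-fit measures proposed in the thesis instead allow the weight given to rare versus dominant species to vary with a free parameter.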
Studeny, Angelika C. 30 Nov 2012. http://hdl.handle.net/10023/3414

Finite and infinite ergodic theory for linear and conformal dynamical systems
http://hdl.handle.net/10023/3220
The first main topic of this thesis is the thorough analysis of two families of piecewise linear
maps on the unit interval, the α-Lüroth and α-Farey maps. Here, α denotes a countably infinite
partition of the unit interval whose atoms only accumulate at the origin. The basic properties
of these maps will be developed, including that each α-Lüroth map (denoted Lα) gives rise to a
series expansion of real numbers in [0,1], a certain type of Generalised Lüroth Series. The first
example of such an expansion was given by Lüroth. The map Lα is the jump transformation
of the corresponding α-Farey map Fα. The maps Lα and Fα stand in the same relationship
to each other as the classical Farey and Gauss maps, which give rise to the continued fraction expansion of a real
number. We also consider the topological properties of Fα and some Diophantine-type sets of
numbers expressed in terms of the α-Lüroth expansion.
Next we investigate certain ergodic-theoretic properties of the maps Lα and Fα. It will turn
out that the Lebesgue measure λ is invariant for every map Lα and that there exists a unique
Lebesgue-absolutely continuous invariant measure for Fα. We will give a precise expression for
the density of this measure. Our main result is that both Lα and Fα are exact, and thus ergodic.
The interest in the invariant measure for Fα lies in the fact that under a particular condition on
the underlying partition α, the invariant measure associated to the map Fα is infinite.
Then we proceed to introduce and examine the sequence of α-sum-level sets arising from
the α-Lüroth map, for an arbitrary given partition α. These sets can be written dynamically in
terms of Fα. The main result concerning the α-sum-level sets is to establish weak and strong
renewal laws. Note that for the Farey map and the Gauss map, the analogue of this result has
been obtained by Kesseböhmer and Stratmann. There the results were derived by using advanced
infinite ergodic theory, rather than the strong renewal theorems employed here. This underlines
the fact that one of the main ingredients of infinite ergodic theory is provided by some delicate
estimates in renewal theory.
Our final main result concerning the α-Lüroth and α-Farey systems is to provide a fractal-geometric
description of the Lyapunov spectra associated with each of the maps Lα and Fα.
The Lyapunov spectra for the Farey map and the Gauss map have been investigated in detail by
Kesseböhmer and Stratmann. The Farey map and the Gauss map are non-linear, whereas the
systems we consider are always piecewise linear. However, since our analysis is based on a large
family of different partitions of the unit interval, the class of maps considered here allows us
to detect a variety of interesting new phenomena, including that of phase transitions.
Finally, we come to the conformal systems of the title. These are the limit sets of discrete
subgroups of the group of isometries of the hyperbolic plane. For these so-called Fuchsian
groups, our first main result is to establish the Hausdorff dimension of some Diophantine-type
sets contained in the limit set that are similar to those considered for the maps Lα. These sets
are then used in our second main result to analyse the more geometrically defined strict-Jarník
limit set of a Fuchsian group. Finally, we obtain a “weak multifractal spectrum” for the Patterson
measure associated to the Fuchsian group.
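The classical Lüroth expansion referred to above (the first example of a Generalised Lüroth Series; the α-Lüroth maps generalise it by varying the underlying partition) can be computed exactly for rationals. The interval convention below, x in [1/n, 1/(n-1)) giving digit n, is one standard choice and may differ from the convention used in the thesis.

```python
from fractions import Fraction
from math import ceil

def luroth_digits(x, max_digits=20):
    """Digits of the classical Lüroth expansion of x in (0, 1).

    Convention: x in [1/n, 1/(n-1)) gives digit n, so n = ceil(1/x);
    the Lüroth (jump) map is then T(x) = n(n-1)x - (n-1), and rationals
    whose orbit hits 0 have finite expansions.
    """
    digits = []
    while x != 0 and len(digits) < max_digits:
        n = ceil(1 / x)
        digits.append(n)
        x = n * (n - 1) * x - (n - 1)
    return digits

def luroth_value(digits):
    """Reconstruct x = 1/a1 + 1/(a1(a1-1)) * (1/a2 + ...) from its digits."""
    x = Fraction(0)
    for a in reversed(digits):
        x = Fraction(1, a) + x / (a * (a - 1))
    return x

d = luroth_digits(Fraction(3, 8))   # finite expansion: [3, 4]
```

Replacing the partition {[1/n, 1/(n-1))} by an arbitrary countable partition α whose atoms accumulate only at the origin gives exactly the α-Lüroth setting studied in the thesis.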
Munday, Sara. 30 Nov 2011. http://hdl.handle.net/10023/3220

Spatial patterns and species coexistence : using spatial statistics to identify underlying ecological processes in plant communities
http://hdl.handle.net/10023/3084
The use of spatial statistics to investigate ecological processes in plant communities is becoming increasingly widespread. In diverse communities such as tropical rainforests, analysis of spatial structure may help to unravel the various processes that act and interact to maintain high levels of diversity. In particular, a number of contrasting mechanisms have been suggested to explain species coexistence, and these differ greatly in their practical implications for the ecology and conservation of tropical forests. Traditional first-order measures of community structure have proved unable to distinguish these mechanisms in practice, but statistics that describe spatial structure may be able to do so. This is of great interest and relevance as spatially explicit data become available for a range of ecological communities and analysis methods for these data become more accessible.
This thesis investigates the potential for inference about underlying ecological processes in plant communities using spatial statistics. Current methodologies for spatial analysis are reviewed and extended, and are used to characterise the spatial signals of the principal theorised mechanisms of coexistence. The sensitivity of a range of spatial statistics to these signals is assessed, and the strength of such signals in natural communities is investigated.
The spatial signals of the processes considered here are found to be strong and robust to modelled stochastic variation. Several new and existing spatial statistics are found to be sensitive to these signals, and offer great promise for inference about underlying processes from empirical data. The relative strengths of particular processes are found to vary between natural communities, with any one theory being insufficient to explain observed patterns. This thesis extends both understanding of species coexistence in diverse plant communities and the methodology for assessing underlying process in particular cases. It demonstrates that the potential of spatial statistics in ecology is great and largely unexplored.
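As a concrete instance of the kind of second-order spatial statistic such analyses rely on, here is a naive estimate of Ripley's K for a random point pattern. Ripley's K is named purely as an illustration (the thesis reviews and extends a range of statistics), and the estimator below deliberately ignores edge corrections.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy pattern: complete spatial randomness (CSR) on the unit square.
n, area = 500, 1.0
pts = rng.random((n, 2))

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K at distance r (ignores edge effects)."""
    m = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = np.sum((d > 0) & (d <= r))   # ordered pairs closer than r
    lam = m / area
    return pairs / (lam * m)

r = 0.05
k_hat = ripley_k(pts, r, area)
k_csr = np.pi * r**2   # theoretical value under CSR
```

Departures of the estimated K from the CSR benchmark (clustering above, regularity below) are the sort of spatial signal used to discriminate between coexistence mechanisms.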
Brown, Calum. 01 Nov 2012. http://hdl.handle.net/10023/3084

Estimating abundance of rare, small mammals: A case study of the Key Largo woodrat (Neotoma floridana smalli)
http://hdl.handle.net/10023/2068
Estimates of animal abundance or density are fundamental quantities in ecology and conservation, but for many species, such as rare, small mammals, obtaining robust estimates is problematic. In this thesis, I combine elements of two standard abundance estimation methods, capture-recapture and distance sampling, to develop a method called trapping point transects (TPT). In TPT, a "detection function" g(r) (i.e. the probability of capturing an animal, given it is r metres from a trap when the trap is set) is estimated using a subset of animals whose locations are known prior to traps being set. Generalised linear models are used to estimate the detection function, and the model can be extended to include random effects to allow for heterogeneity in capture probabilities. Standard point transect methods are modified to estimate abundance. Two abundance estimators are available. The first is based on the reciprocal of the expected probability of detecting an animal, P̂, where the expectation is over r;
the second is the expectation of the reciprocal of P̂.
Performance of the TPT method under various sampling efforts and underlying true detection probabilities was investigated in a simulation study. When the underlying probability of detection was high (g(0) = 0.88) and between-individual variation was small, survey effort could be surprisingly low (c. 510 trap nights) while still yielding low bias (c. 4%) in both estimators;
but in certain situations the second estimator can be extremely biased. Uncertainty and relative bias in population estimates increased with decreasing detectability and increasing between-individual variation.
Abundance of the Key Largo woodrat (Neotoma floridana smalli), an endangered rodent with a restricted geographic range, was estimated using TPT. The TPT method compared well to other viable methods (capture-recapture and spatially-explicit capture-recapture), in terms of both field practicality and cost. The TPT method may generally be useful in estimating animal abundance in trapping studies and variants of the TPT method are presented.
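The difference between the two estimators can be seen directly: by Jensen's inequality, the expectation of the reciprocal of P̂ always exceeds the reciprocal of its expectation whenever detection probabilities vary. A toy numerical check (the detection probabilities below are arbitrary illustrative values, not output from a fitted g(r)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Detection probabilities of animals at varying distances from a trap
# (toy values; in TPT these would come from the fitted detection function).
P = rng.uniform(0.2, 0.9, 10_000)
n = 100   # number of animals captured (illustrative)

N1 = n / P.mean()            # reciprocal of the expected detection probability
N2 = n * np.mean(1.0 / P)    # expectation of the reciprocal

# Jensen's inequality: E[1/P] >= 1/E[P], with equality only when P is
# constant, so N2 >= N1 under any heterogeneity in detection.
```

This is consistent with the simulation finding above that the second estimator can be extremely biased in certain situations.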
Potts, Joanne M. 01 Jan 2011. http://hdl.handle.net/10023/2068

Bayesian modelling of integrated data and its application to seabird populations
http://hdl.handle.net/10023/1635
Integrated data analyses are becoming increasingly popular in studies of wild animal populations where two or more separate sources of data contain information about common parameters. Here we develop an integrated population model using abundance and demographic data from a study of common guillemots (Uria aalge) on the Isle of May, southeast Scotland. A state-space model for the count data is supplemented by three demographic time series (productivity and two mark-recapture-recovery (MRR)), enabling the estimation of prebreeder emigration rate - a parameter for which there is no direct observational data, and which is unidentifiable in the separate analysis of MRR data. A Bayesian approach using MCMC provides a flexible and powerful analysis framework.
This model is extended to provide predictions of future population trajectories. Adopting random effects models for the survival and productivity parameters, we implement the MCMC algorithm to obtain a posterior sample of the underlying process means and variances (and population sizes) within the study period. Given this sample, we predict future demographic parameters, which in turn allows us to predict future population sizes and obtain the corresponding posterior distribution. Under the assumption that recent, unfavourable conditions persist in the future, we obtain a posterior probability of 70% that there is a population decline of >25% over a 10-year period.
Lastly, using MRR data we test for spatial, temporal and age-related correlations in guillemot survival among three widely separated Scottish colonies that have varying overlap in nonbreeding distribution. We show that survival is highly correlated over time for colonies/age classes sharing wintering areas, and essentially uncorrelated for those with separate wintering areas. These results strongly suggest that one or more aspects of winter environment are responsible for spatiotemporal variation in survival of British guillemots, and provide insight into the factors driving multi-population dynamics of the species.
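The prediction step described above can be caricatured as Monte Carlo propagation of sampled vital rates. The sketch below uses arbitrary illustrative values for the process mean and between-year variance of the log growth rate, rather than actual posterior draws from the guillemot model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a posterior sample of demographic process parameters
# (in the thesis these are MCMC draws; here, illustrative constants).
n_draws, horizon = 5000, 10
mu_growth, sd_growth = -0.02, 0.03   # mean / sd of annual log growth rate
N0 = 20000.0

decline = np.zeros(n_draws, dtype=bool)
for i in range(n_draws):
    # Draw year-specific log growth rates around the process mean and
    # project the population forward over the prediction horizon.
    r_annual = rng.normal(mu_growth, sd_growth, horizon)
    N_final = N0 * np.exp(r_annual.sum())
    decline[i] = N_final < 0.75 * N0   # decline of more than 25%

p_decline = decline.mean()
```

The proportion of trajectories ending below the threshold approximates the posterior probability of a >25% decline, the quantity reported as 70% for the guillemot study under persistently unfavourable conditions.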
Reynolds, Toby J. 30 Nov 2010. http://hdl.handle.net/10023/1635

Statistical models for the long-term monitoring of songbird populations: a Bayesian analysis of constant effort sites and ring-recovery data
http://hdl.handle.net/10023/885
To underpin and improve advice given to government and other interested parties on the state of Britain’s common songbird populations, new models for analysing ecological data are developed in this thesis. These models use data from the British Trust for Ornithology’s Constant Effort Sites (CES) scheme, an annual bird-ringing programme in which catch effort is standardised. Data from the CES scheme are routinely used to index abundance and productivity, and, to a lesser extent, to estimate adult survival rates. However, two features of the CES data that complicate analysis were previously inadequately addressed, namely the presence in the catch of “transient” birds not associated with the local population, and the sporadic failure of the constancy-of-effort assumption arising from the absence of within-year catch data. The current methodology is extended, with efficient Bayesian models developed for each of these demographic parameters that account for both of these data nuances, and from which reliable and usefully precise estimates are obtained.

Of increasing interest is the relationship between abundance and the underlying vital rates, an understanding of which facilitates effective conservation. CES data are particularly amenable to an integrated approach to population modelling, providing a combination of demographic information from a single source. Such an integrated approach is developed here, employing Bayesian methodology and a simple population model to unite abundance, productivity and survival within a consistent framework. Independent data from ring-recoveries provide additional information on adult and juvenile survival rates. Specific advantages of this new integrated approach are identified, among which are the ability to determine juvenile survival accurately, to disentangle the probabilities of survival and permanent emigration, and to obtain estimates of total seasonal productivity.

The methodologies developed in this thesis are applied to CES data from Sedge Warbler, Acrocephalus schoenobaenus, and Reed Warbler, A. scirpaceus.
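The "simple population model" at the heart of such an integrated analysis is typically a first-order projection linking abundance to productivity and survival. A minimal two-age-class sketch, with entirely hypothetical parameter values:

```python
# A two-age-class skeleton of the kind of population model used to link
# abundance, productivity and survival (all parameter values hypothetical).
phi_juv, phi_ad = 0.3, 0.5   # first-year and adult annual survival
rho = 3.0                    # productivity: young raised per adult female

def project(n_ad, years):
    """Project adult female abundance forward under the simple model."""
    traj = [n_ad]
    for _ in range(years):
        recruits = phi_juv * rho * n_ad      # young that survive to recruit
        n_ad = recruits + phi_ad * n_ad      # recruits plus surviving adults
        traj.append(n_ad)
    return traj

traj = project(100.0, 5)
# Annual multiplication rate is phi_juv * rho + phi_ad = 1.4 here
```

In the thesis this deterministic skeleton sits inside a Bayesian state-space model, with the CES and ring-recovery likelihoods informing the parameters.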
Fri, 25 Jun 2010 00:00:00 GMT
Cave, Vanessa M.
Topics in estimation of quantum channels
http://hdl.handle.net/10023/869
A quantum channel is a mapping which sends density matrices to density matrices. The estimation of quantum channels is of great importance to the field of quantum information. In this thesis two topics related to estimation of quantum channels are investigated. The first of these is the upper bound of Sarovar and Milburn (2006) on the Fisher information obtainable by measuring the output of a channel. Two questions raised by Sarovar and Milburn about their bound are answered. A Riemannian metric on the space of quantum states is introduced, related to the construction of the Sarovar and Milburn bound. Its properties are characterized.

The second topic investigated is the estimation of unitary channels. The situation is considered in which an experimenter has several non-identical unitary channels that have the same parameter. It is shown that it is possible to improve estimation using the channels together, analogous to the case of identical unitary channels. Also, a new method of phase estimation is given based on a method sketched by Kitaev (1996). Unlike other phase estimation procedures which perform similarly, this procedure requires only very basic experimental resources.
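The quantity bounded by Sarovar and Milburn, the classical Fisher information of a measurement on the channel output, can be computed directly for a toy phase channel. The two-outcome model below, with p0 = (1 + cos θ)/2 as for a qubit phase read out in the X basis, is assumed purely for illustration:

```python
import numpy as np

def fisher_info(theta, eps=1e-6):
    """Classical Fisher information of a two-outcome measurement whose
    probabilities depend on a phase theta (toy model, not from the thesis)."""
    def probs(t):
        p0 = (1 + np.cos(t)) / 2
        return np.array([p0, 1 - p0])
    p = probs(theta)
    # central-difference derivative of the outcome probabilities
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)
    return float(np.sum(dp ** 2 / p))

# For this model the information equals 1 for every theta away from the
# endpoints, since dp^2 = sin^2(theta)/4 and p0*p1 = sin^2(theta)/4.
print(fisher_info(1.0))
```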
Wed, 23 Jun 2010 00:00:00 GMT
O'Loan, Caleb J.
Multi-species state-space modelling of the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) in Scotland
http://hdl.handle.net/10023/864
State-space modelling is a powerful tool to study ecological systems. The direct inclusion of uncertainty, unification of models and data, and ability to model unobserved, hidden states increases our knowledge about the environment and provides new ecological insights. I extend the state-space framework to create multi-species models, showing that the ability to model ecosystem interactions is limited only by data availability. State-space models are fit using both Bayesian and frequentist methods, making them independent of a statistical school of thought. Bayesian approaches can have the advantage in their ability to account for missing data and to fit hierarchical structures and models with many parameters to limited data, which is often the case in ecological studies. I have taken a Bayesian model fitting approach in this thesis.

The predator-prey interactions between the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) are used to demonstrate state-space modelling's capabilities. The harrier data are believed to be known without error, while missing data make the cyclic dynamics of the grouse harder to model. The grouse-harrier interactions are modelled in a multi-species state-space model, rather than including one species as a covariate in the other's model. Finally, models are included for the harriers' alternate prey.

The single- and multi-species state-space models for the predator-prey interactions provide insight into the species' management. The models investigate aspects of the species' behaviour, from the mechanisms behind grouse cycles to what motivates harrier immigration. The inferences drawn from these models are applicable to management, suggesting actions to halt grouse cycles or mitigate the grouse-harrier conflict. Overall, the multi-species models suggest that two popular ideas for grouse-harrier management, diversionary feeding and habitat manipulation to reduce alternate prey densities, will not have the desired effect, and in the case of reducing prey densities, may even increase the harriers' impact on grouse chicks.
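The two linked processes of such a state-space model can be sketched in simulation form. The functional forms and all parameter values below are hypothetical, chosen only to show the structure (a latent predator-prey state process plus a count observation process), and do not reproduce the thesis's models:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 30
grouse = np.empty(T)
harrier = np.empty(T)
grouse[0], harrier[0] = 500.0, 10.0

for t in range(1, T):
    g, h = grouse[t - 1], harrier[t - 1]
    # State process: prey grow logistically and are depressed by predators;
    # predator growth tracks prey availability (all rates hypothetical).
    grouse[t] = max(g * np.exp(0.3 * (1 - g / 1000) - 0.02 * h
                               + rng.normal(0, 0.05)), 1.0)
    harrier[t] = max(h * np.exp(0.001 * g - 0.4 + rng.normal(0, 0.05)), 0.1)

# Observation process: Poisson counts link the data to the hidden states
obs_grouse = rng.poisson(grouse)
```

Fitting then means inferring the hidden `grouse` and `harrier` series, and the process parameters, from counts like `obs_grouse`.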
Wed, 23 Jun 2010 00:00:00 GMThttp://hdl.handle.net/10023/8642010-06-23T00:00:00ZNew, Leslie FrancesState-space modelling is a powerful tool to study ecological systems. The direct inclusion of uncertainty, unification of models and data, and ability to model unobserved, hidden states increases our knowledge about the environment and provides
new ecological insights. I extend the state-space framework to create multi-species
models, showing that the ability to model ecosystem interactions is limited only by data availability. State-space models are fit using both Bayesian and Frequentist methods, making them independent of a statistical school of thought. Bayesian approaches can have the advantage in their ability to account for missing data and fit hierarchical structures
and models with many parameters to limited data; often the case in ecological studies.
I have taken a Bayesian model fitting approach in this thesis.
The predator-prey interactions between the hen harrier (Circus cyaneus) and red grouse (Lagopus lagopus scoticus) are used to demonstrate state-space modelling’s
capabilities. The harrier data are believed to be known without error, while missing
data make the cyclic dynamics of the grouse harder to model. The grouse-harrier interactions are modelled in a multi-species state-space model, rather than including
one species as a covariate in the other’s model. Finally, models are included for the
harriers’ alternate prey.
The single- and multi-species state-space models for the predator-prey interactions
provide insight into the species’ management. The models investigate aspects of the species’ behaviour, from the mechanisms behind grouse cycles to what motivates harrier immigration. The inferences drawn from these models are applicable to management, suggesting actions to halt grouse cycles or mitigate the grouse-harrier conflict. Overall, the multi-species models suggest that two popular ideas for grouse-harrier management, diversionary feeding and habitat manipulation to reduce alternate prey densities, will not have the desired effect, and in the case of reducing prey densities, may even increase the harriers’ impact on grouse chicks.Distance software: design and analysis of distance sampling surveys for estimating population size
http://hdl.handle.net/10023/817
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial pre-requisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: CDS (conventional distance sampling), which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; MCDS (multiple covariate distance sampling), which allows covariates in addition to distance; and MRDS (mark-recapture distance sampling), which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the DSM (density surface modelling) analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods accessible to practicing ecologists.
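The core CDS calculation, fitting a detection function to perpendicular distances and converting it to an effective strip half-width, can be sketched for the half-normal model. This is a simulation-based illustration of the idea, not a substitute for Distance's engines (which add truncation, model selection and variance estimation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated detection distances: with uniformly distributed animals and a
# half-normal detection function, detected distances follow a half-normal.
sigma_true = 20.0                            # metres (hypothetical)
x = np.abs(rng.normal(0.0, sigma_true, 200))

# Half-normal g(x) = exp(-x^2 / (2 sigma^2)); untruncated MLE of sigma
sigma_hat = float(np.sqrt(np.mean(x ** 2)))

# Effective strip half-width: integral of g(x) from 0 to infinity
esw = sigma_hat * np.sqrt(np.pi / 2)

L = 10_000.0                                 # total transect length, metres
D_hat = len(x) / (2 * esw * L)               # animals per square metre
print(sigma_hat, D_hat)
```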
Fri, 01 Jan 2010 00:00:00 GMT
Thomas, Len; Buckland, Stephen Terrence; Rexstad, Eric; Laake, J L; Strindberg, S; Hedley, S L; Bishop, J R B; Marques, Tiago A.
Embedding population dynamics in mark-recapture models
http://hdl.handle.net/10023/718
Mark-recapture methods use repeated captures of individually identifiable animals to provide estimates of properties of populations. Different models allow estimates to be obtained for population size and rates of processes governing population dynamics. State-space models consist of two linked processes evolving simultaneously over time. The state process models the evolution of the true, but unknown, states of the population. The observation process relates observations on the population to these true states.
Mark-recapture models specified within a state-space framework allow population dynamics models to be embedded in inference ensuring that estimated changes in the population are consistent with assumptions regarding the biology of the modelled population. This overcomes a limitation of current mark-recapture methods.
Two alternative approaches are considered. The "conditional" approach conditions on known numbers of animals possessing capture history patterns including capture in the current time period. An animal's capture history determines its state; consequently, capture parameters appear in the state process rather than the observation process. There is no observation error in the model. Uncertainty occurs only through the numbers of animals not captured in the current time period.
An "unconditional" approach is considered in which the capture histories are regarded as observations. Consequently, capture histories do not influence an animal's state and capture probability parameters appear in the observation process. Capture histories are considered a random realization of the stochastic observation process. This is more consistent with traditional mark-recapture methods.
Development and implementation of particle filtering techniques for fitting these models under each approach are discussed. Simulation studies show reasonable performance for the unconditional approach and highlight problems with the conditional approach. Strengths and limitations of each approach are outlined, with reference to Soay sheep data analysis, and suggestions are presented for future analyses.
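The particle filtering machinery referred to above can be illustrated on a deliberately simple population state-space model (Poisson growth as the state process, binomial counts as the observation process). This generic bootstrap filter is neither of the thesis's conditional or unconditional formulations, and all numbers are hypothetical:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

# Toy state-space model: N_t ~ Poisson(lam * N_{t-1}) state process,
# y_t ~ Binomial(N_t, p) observation process (all values hypothetical).
lam, p = 1.05, 0.6
y = [60, 63, 66, 70, 71]                     # hypothetical observed counts

n_part = 2000
particles = rng.poisson(100, n_part)         # initial state particles
filtered_means = []
for obs in y:
    particles = rng.poisson(lam * particles)   # propagate the state process
    w = binom.pmf(obs, particles, p)           # weight by observation process
    w = w / w.sum()
    filtered_means.append(float(np.sum(w * particles)))
    # multinomial resampling back to equal weights
    particles = rng.choice(particles, size=n_part, p=w)
```

`filtered_means` then approximates the filtered expectation of true population size given the counts so far.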
Wed, 24 Jun 2009 00:00:00 GMT
Bishop, Jonathan R. B.
The importance of analysis method for breeding bird survey population trend estimates
http://hdl.handle.net/10023/685
Population trends from the Breeding Bird Survey are widely used to focus conservation efforts on species thought to be in decline and to test preliminary hypotheses regarding the causes of these declines. A number of statistical methods have been used to estimate population trends, but there is no consensus as to which is the most reliable. We quantified differences in trend estimates of different analysis methods applied to the same subset of Breeding Bird Survey data. We estimated trends for 115 species in British Columbia using three analysis methods: U.S. National Biological Service route regression, Canadian Wildlife Service route regression, and nonparametric rank-trends analysis. Overall, the number of species estimated to be declining was similar among the three methods, but the number of statistically significant declines was not (15, 8, and 29 respectively). In addition, many differences existed among methods in the trend estimates assigned to individual species. Comparing the two route regression methods, Canadian Wildlife Service estimates had a greater absolute magnitude on average than those of the U.S. National Biological Service method. U.S. National Biological Service estimates were on average more positive than the Canadian Wildlife Service estimates when the respective agency's data selection criteria were applied separately. These results imply that our ability to detect population declines and to prioritize species of conservation concern depends strongly upon the analysis method used. This highlights the need for further research to determine how best to estimate trends accurately from the data. We suggest a method for evaluating the performance of the analysis methods by using simulated Breeding Bird Survey data.
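The route-regression idea shared by two of the three methods can be sketched as a per-route log-linear slope combined across routes. This is a deliberately stripped-down illustration with simulated counts; the agencies' actual methods differ in weighting and data-selection rules, which is precisely what drives the discrepancies reported above:

```python
import numpy as np

rng = np.random.default_rng(3)

years = np.arange(10)
n_routes = 25
true_trend = -0.02                           # hypothetical 2% annual decline
mean_counts = np.exp(np.log(30.0) + true_trend * years)
counts = rng.poisson(mean_counts, size=(n_routes, years.size))

# Per-route log-linear slope (small constant guards against log of zero)
slopes = [np.polyfit(years, np.log(route + 0.5), 1)[0] for route in counts]

trend_hat = float(np.mean(slopes))           # simple unweighted combination
print(f"estimated log-linear trend: {trend_hat:.3f} per year")
```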
Mon, 01 Jan 1996 00:00:00 GMT
Thomas, Len; Martin, Kathy
Retrospective power analysis
http://hdl.handle.net/10023/679
Many papers have appeared in the recent biological literature encouraging us to incorporate statistical power analysis into our hypothesis testing protocol (Peterman 1990; Fairweather 1991; Muller & Benignus 1992; Taylor & Gerrodette 1993; Searcy-Bernal 1994; Thomas & Juanes 1996). The importance of doing a power analysis before beginning a study (prospective power analysis) is universally accepted: such analyses help us to decide how many samples are required to have a good chance of getting unambiguous results. In contrast, the role of power analysis after the data are collected and analyzed (retrospective power analysis) is controversial, as is evidenced by the papers of Reed and Blaustein (1995) and Hayes and Steidl (1997). The controversy is over the use of information from the sample data in retrospective power calculations. As I will show, the type of information used has fundamental implications for the value of such analyses. I compare the approaches to calculating retrospective power, noting the strengths and weaknesses of each, and make general recommendations as to how and when retrospective power analyses should be conducted.
The pdf contains the article; the ASCII file contains SAS code to calculate power and confidence limits for simple linear regression, as detailed in the article appendix.
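A Python analogue of the kind of calculation in that appendix, the power of the two-sided t-test for a simple linear regression slope via the noncentral t distribution, might look as follows (this is not the original SAS code, and the example numbers are hypothetical):

```python
import numpy as np
from scipy.stats import t as t_dist, nct

def slope_power(beta, sigma, x, alpha=0.05):
    """Power of the two-sided t-test for a simple linear regression slope.
    beta: slope to detect; sigma: residual sd; x: design points."""
    x = np.asarray(x, dtype=float)
    df = len(x) - 2
    sxx = np.sum((x - x.mean()) ** 2)
    se = sigma / np.sqrt(sxx)                # standard error of the slope
    ncp = beta / se                          # noncentrality parameter
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    return float((1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp))

# Example: 10 equally spaced survey years, slope -0.5, residual sd 1
power = slope_power(-0.5, 1.0, np.arange(10))
print(round(power, 3))
```

With `beta = 0` the function returns the test size `alpha`, a useful sanity check.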
Wed, 01 Jan 1997 00:00:00 GMT
Thomas, Len
A unified framework for modelling wildlife population dynamics
http://hdl.handle.net/10023/678
This paper proposes a unified framework for defining and fitting stochastic, discrete-time, discrete-stage population dynamics models. The biological system is described by a state–space model, where the true but unknown state of the population is modelled by a state process, and this is linked to survey data by an observation process. All sources of uncertainty in the inputs, including uncertainty about model specification, are readily incorporated. The paper shows how the state process can be represented as a generalization of the standard Leslie or Lefkovitch matrix. By dividing the state process into subprocesses, complex models can be constructed from manageable building blocks. The paper illustrates the approach with a model of the British Grey Seal metapopulation, using sequential importance sampling with kernel smoothing to fit the model.
The pdf document contains the full article text; program code (in S-PLUS 6.1) for the example analysis is in the three text files; data is available from the Sea Mammal Research Unit (http://www.smru.st-and.ac.uk)
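The Leslie/Lefkovitch backbone of the state process can be sketched as a stage-structured matrix projection (three stages, hypothetical rates); the paper's framework then generalizes this matrix, splits it into subprocesses, and adds stochasticity:

```python
import numpy as np

# Stage-structured projection matrix (all rates hypothetical):
# row 1 holds fecundities, the sub-diagonal holds stage-transition survival,
# and the bottom-right entry is stage-3 persistence (Lefkovitch-style).
A = np.array([
    [0.0, 1.2, 1.5],
    [0.6, 0.0, 0.0],
    [0.0, 0.8, 0.9],
])

n = np.array([100.0, 50.0, 20.0])   # initial stage abundances
for _ in range(50):
    n = A @ n                        # deterministic projection, one step/year

# Long-run growth rate = dominant eigenvalue (spectral radius) of A
lam = float(np.max(np.abs(np.linalg.eigvals(A))))
print(round(lam, 3))
```

After enough iterations the population grows by a factor `lam` each year, whatever the starting stage structure.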
Sat, 01 Jan 2005 00:00:00 GMT
Thomas, Len; Buckland, Stephen T.; Newman, K B; Harwood, John
WinBUGS for population ecologists: Bayesian modeling using Markov Chain Monte Carlo methods.
http://hdl.handle.net/10023/677
The computer package WinBUGS is introduced. We first give a brief introduction to Bayesian theory and its implementation using Markov chain Monte Carlo (MCMC) algorithms. We then present three case studies showing how WinBUGS can be used when classical theory is difficult to implement. The first example uses data on white storks from Baden Württemberg, Germany, to demonstrate the use of mark-recapture models to estimate survival, and also how to cope with unexplained variance through random effects. Recent advances in methodology and also the WinBUGS software allow us to introduce (i) a flexible way of incorporating covariates using spline smoothing and (ii) a method to deal with missing values in covariates. The second example shows how to estimate population density while accounting for detectability, using distance sampling methods applied to a test dataset collected on a known population of wooden stakes. Finally, the third case study involves the use of state-space models of wildlife population dynamics to make inferences about density dependence in a North American duck species. Reversible Jump MCMC is used to calculate the probability of various candidate models. For all examples, data and WinBUGS code are provided.
This paper was presented at the EURING 2007 Technical Meeting, January 14-21, Dunedin, New Zealand. It has been submitted for publication in the conference proceedings, which will appear as a special issue of Environmental and Ecological Statistics.; The zip file contains accompanying code in WinBUGS
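The MCMC machinery that WinBUGS automates can be demystified with a hand-rolled sampler for a deliberately tiny model: a single survival probability with a flat prior and binomial data (numbers hypothetical), updated by random-walk Metropolis:

```python
import numpy as np

rng = np.random.default_rng(11)

released, survived = 120, 78     # hypothetical ringing data

def log_post(phi):
    """Log posterior for survival phi: Uniform(0,1) prior x binomial likelihood."""
    if not 0 < phi < 1:
        return -np.inf
    return survived * np.log(phi) + (released - survived) * np.log(1 - phi)

phi, draws = 0.5, []
for _ in range(20000):
    prop = phi + rng.normal(0, 0.05)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(phi):
        phi = prop                             # accept; otherwise keep phi
    draws.append(phi)

posterior = np.array(draws[5000:])             # discard burn-in
print(posterior.mean())                        # ~0.65, the Beta(79, 43) mean
```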
Tue, 01 Jan 2008 00:00:00 GMT
Giminez, O; Bonner, S J; King, Ruth; Parker, R A; Brooks, S P; Jamieson, L E; Grosbois, V; Morgan, B J T; Thomas, Len
Density estimation and time trend analysis of large herbivores in Nagarhole, India
http://hdl.handle.net/10023/669
Density estimates for six large herbivore species were obtained through
analysis of line transect data from Nagarhole National Park, south-western India,
collected between 1989 and 2000. These species were Chital (Axis axis), Sambar
(Cervus unicolor), Gaur (Bos gaurus), Wild Pig (Sus scrofa), Muntjac (Muntiacus
muntjak) and Asian Elephant (Elephas maximus). Multiple Covariate Distance
Sampling (MCDS) models were used to derive these density estimates. The distance
histograms showed a relatively large spike at zero, which can lead to problems when
fitting MCDS models. The effects of this spike were investigated and remedied by
forward truncation. Density estimates from unmodified dataset were 10-15% higher
than estimates from the forward truncated data, with this going up to 37% for
Muntjac. These could possibly be over estimates. Empirical trend models were then
fit to the density estimates. Overall trends were stable, though there were intra-habitat
differences in trends for some species. Trends were similar whether or not
forward truncation was applied.
MRes in Environmental Biology
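As a minimal numeric sketch of the issue described above (synthetic data and standard library only; not the thesis's analysis, and treating "forward truncation" as left truncation of small distances, which is an assumption on our part), a spike of observations at zero distance pulls the fitted half-normal detection scale downward, and refitting after discarding distances below a small cut-off largely recovers the true scale:

```python
import math
import random

random.seed(1)
sigma_true = 20.0                      # half-normal detection scale (m)
dists = [abs(random.gauss(0.0, sigma_true)) for _ in range(400)]
# Artificial spike at zero distance (e.g. heaped distance estimates or
# animals attracted to the transect line) -- the kind of feature that
# causes problems when fitting MCDS models.
dists += [random.uniform(0.0, 1.0) for _ in range(60)]

def halfnormal_mle(xs, left=0.0):
    """Grid-search MLE of the half-normal scale sigma, optionally
    left-truncating (forward truncation) at distance `left`."""
    xs = [x for x in xs if x >= left]
    best_s, best_ll = None, -math.inf
    for k in range(50, 401):           # candidate sigma from 5.0 to 40.0
        s = 0.1 * k
        # log-likelihood of the (left-truncated) half-normal
        ll = sum(-x * x / (2 * s * s) for x in xs)
        ll += len(xs) * (math.log(2) - 0.5 * math.log(2 * math.pi * s * s)
                         - math.log(math.erfc(left / (s * math.sqrt(2)))))
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s

naive_sigma = halfnormal_mle(dists)            # biased low by the spike
trunc_sigma = halfnormal_mle(dists, left=3.0)  # forward-truncated refit
```

A smaller fitted scale means a narrower effective strip and hence an inflated density estimate, which is the direction of bias the abstract reports.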
Sat, 01 Jan 2005 00:00:00 GMThttp://hdl.handle.net/10023/6692005-01-01T00:00:00ZGangadharan, Aditya
Models of random wildlife movement with an application to distance sampling
http://hdl.handle.net/10023/668
In this paper we present three models of random wildlife movement: a one-dimensional model of wildlife-observer encounters on roads, an analogous two-dimensional model, and a further two-dimensional model that borrows ideas from statistical mechanics. We then derive unbiased estimators of wildlife density in terms of encounters for each of these models. By extending these results to incorporate uncertain detection, we suggest three novel distance sampling methods and briefly consider possible field applications.
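The statistical-mechanics analogy can be made concrete with the classical two-dimensional ideal-gas result (our illustration, not necessarily the paper's exact estimator): a stationary observer who detects every animal passing within distance w, with animals of density D moving at speed v in uniformly random directions, records encounters at expected rate 2wvD, which inverts to a density estimator:

```python
def gas_model_density(n_encounters, w, v, t):
    """Estimate animal density D from n encounters over time t, for an
    observer detecting every animal passing within distance w, with
    animals moving at speed v in uniformly random directions.
    E[n] = 2 * w * v * D * t, so D-hat = n / (2 * w * v * t)."""
    return n_encounters / (2.0 * w * v * t)

# Hypothetical numbers: 36 encounters in 3 h, w = 0.05 km, v = 0.5 km/h
d_hat = gas_model_density(36, w=0.05, v=0.5, t=3.0)   # animals per km^2
```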
Mon, 01 Jan 2007 00:00:00 GMThttp://hdl.handle.net/10023/6682007-01-01T00:00:00ZDiTraglia, Francis J.
Designing a shipboard line transect survey to estimate cetacean abundance off the Azores Archipelago, Portugal
http://hdl.handle.net/10023/667
Management schemes dedicated to the conservation of wildlife populations rely on the effective monitoring of population size, and this requires the accurate and precise estimation of abundance. The accuracy and precision of estimates are determined to a large extent by the survey design. Line transect surveys are commonly applied to wildlife population assessments in which the primary purpose of a survey design is to ensure that the critical distance sampling assumptions are met.
Little information is available regarding cetacean abundance in the Archipelago of the Azores (Portugal). This study aims to design a shipboard line transect survey that allows the collection of the data required to provide abundance estimates for such species. Several aspects must be taken into consideration when designing a survey to estimate cetacean abundance. This is an iterative process, and there is a constant trade-off between the logistic constraints and the desired statistical robustness. Information on this process, including the criteria used for the choices made when defining the elements of a survey design, is provided to aid policy makers and environmental managers.
Three survey effort scenarios are provided to illustrate the range of possibilities between statistical robustness and logistic/management restrictions. A survey is designed for the most economical scenario (L = 5,000 km), although the second scenario (L = 17,600 km) is the one recommended for implementation, given that it provides robust estimates of
abundance (CV ≤ 0.2).
Revised version November 2008. MRes in Marine Mammal Science
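The effort-versus-precision trade-off behind such scenarios is often sketched with the standard line transect design formula L = (b / cv²)(L₀ / n₀), where a pilot survey yields n₀ detections over effort L₀ and b ≈ 3 is the usual default dispersion constant. All numbers below are hypothetical, not the Azores pilot data:

```python
def required_effort(cv_target, n0, L0, b=3.0):
    """Line length L needed to reach a target CV of estimated density,
    given a pilot survey with n0 detections over effort L0.
    Standard design formula: L = (b / cv^2) * (L0 / n0), with b ~ 3
    covering encounter-rate and detection-function variance."""
    return (b / cv_target ** 2) * (L0 / n0)

# Hypothetical pilot: 40 sightings over 1,000 km; target CV = 0.2
L = required_effort(0.2, n0=40, L0=1000.0)   # km of transect line
```

Halving the target CV quadruples the required effort, which is why the more robust scenarios above demand several times the line length of the economical one.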
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/10023/6672008-01-01T00:00:00ZFaustino, Cláudia Estevinho Santos
Behavioural changes of a long-ranging diver in response to oceanographic conditions
http://hdl.handle.net/10023/665
The development of an animal-borne instrument that can record oceanographic measurements (CTD-SRDL) has enabled the collection of oceanographic data at a scale relevant to the counterpart behavioural data, both in time and 3-dimensional space. This has advanced the potential for studies of the behaviour of deep-diving marine animals and the way in which they respond to their environment, yet the nature of the data delivered by CTD-SRDLs presents substantial analytical challenges and places constraints on its biological interpretation. Behavioural and environmental data, collected using CTD-SRDLs deployed on southern elephant seals (Mirounga leonina) from the South Georgia subpopulation in 2004 and 2005, are analysed for 13 females and 4 males (21,015 dives). Compressed dive profiles are used to classify individual dives into six distinct types based on their 2-dimensional time-depth characteristics using random forest classification. The relationship between dive type and environmental variables, derived from oceanographic data recorded on board the animals, is investigated in the context of regression analysis, employing a multinomial model, as well as independently fitted Generalized Linear Models (GLM) and Generalized Additive Models (GAM) for each dive type. Regression is not found to be an appropriate method for analysing abstracted behavioural dive data, and other methods are suggested. We show that functional specializations can be manifested within a dive type, using square bottom dives (SQ) as an example. The usefulness of dive classification is discussed in the context of behavioural interpretation, and validity of the ecological functions attached to each class. Preliminary analyses are important drivers of further research into improving the interpretability of abstracted behavioural data, and developing efficient, standardized methods for widespread application to this type of data, which is obtained in abundance via satellite telemetry.
BL 5019 Research project. MRes Environmental Biology
Mon, 01 Jan 2007 00:00:00 GMThttp://hdl.handle.net/10023/6652007-01-01T00:00:00ZPhotopoulos, Theoni
Using generalized estimating equations with regression splines to improve analysis of butterfly transect data
http://hdl.handle.net/10023/488
Surveying animal populations is an important aspect of wildlife
management. Distinguishing trend from random fluctuations and
quantifying trend are key goals in any analysis.
The aim of this thesis is to review analyses of Butterfly Monitoring
Survey (BMS) data and to develop new methods which address some
flaws in previous studies. The BMS was established in 1976 at Monks
Wood, Cambridgeshire and sites were added over time throughout
Britain in order to monitor butterfly population trends. Weekly
counts are made over the monitoring season and the main aims are to
produce annual indices and compare these indices over time for any
particular species.
Originally, weekly counts were summed to produce relative indices
and missing counts were estimated using linear interpolation. This
thesis discusses the weaknesses of this basic method
and suggests possible improvements.
In recent years, with advancements in statistical methods and
increased computer power, new methods can be applied to accommodate
the longitudinal and flexible nature of ecological data.
Mixed Models, Generalized Estimating Equations and Generalized
Additive Models are used and the relative merits of each modelling
approach discussed. These methods allow for correlation and
non-linearity in data.
Model selection is an important consideration, and
different tests are introduced and compared.
Once a model is selected, site-level indices are estimated, which
can be collated to produce regional and national indices. Different
methods of estimating precision around indices are also contrasted.
Bootstrapping is found to be a convenient and dependable approach.
Abundance is difficult to disentangle from detectability when only
raw counts are made. Methods for dealing with this
problem are suggested.
Once reliable annual abundance estimates are found, they can be
compared over time using a variety of statistical techniques. The
chain-ratio method is applied to a subset of real data.
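The thesis finds bootstrapping a convenient and dependable way to attach precision to collated indices. As a minimal standard-library sketch (the site indices below are hypothetical, not BMS data), a percentile bootstrap resamples sites with replacement and reads off quantiles of the recomputed index:

```python
import random

random.seed(42)
# Hypothetical annual site-level indices, already collated from weekly counts.
site_indices = [12, 30, 7, 22, 15, 9, 41, 18, 26, 11]

def bootstrap_ci(values, stat=lambda v: sum(v) / len(v),
                 n_boot=5000, alpha=0.05):
    """Percentile bootstrap interval for a collated index: resample
    sites with replacement, recompute the statistic, and take the
    alpha/2 and 1 - alpha/2 empirical quantiles."""
    stats = sorted(
        stat([random.choice(values) for _ in values])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(site_indices)   # 95% interval for the mean index
```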
Sun, 01 Jun 2008 00:00:00 GMThttp://hdl.handle.net/10023/4882008-06-01T00:00:00ZBrewer, Ciara
Incorporating measurement error and density gradients in distance sampling surveys
http://hdl.handle.net/10023/391
Distance sampling is one of the most commonly used methods for estimating density
and abundance. Conventional methods are based on the distances of detected animals
from the center of point transects or the center line of line transects. These distances
are used to model a detection function: the probability of detecting an animal, given
its distance from the line or point. The probability of detecting an animal in the
covered area is given by the mean value of the detection function with respect to
the available distances to be detected. Given this probability, a
Horvitz-Thompson-like estimator of abundance for the covered area follows, hence using a model-based
framework. Inferences for the wider survey region are justified using the survey design.
Conventional distance sampling methods are based on a set of assumptions. In
this thesis I present results that extend distance sampling on two fronts.
Firstly, estimators are derived for situations in which there is measurement error in
the distances. These estimators use information about the measurement error in two
ways: (1) a biased estimator based on the contaminated distances is multiplied by an
appropriate correction factor, which is a function of the errors (PDF approach), and
(2) cast into a likelihood framework that allows parameter estimation in the presence
of measurement error (likelihood approach).
Secondly, methods are developed that relax the conventional assumption that the
distribution of animals is independent of distance from the lines or points (usually
guaranteed by appropriate survey design). In particular, the new methods deal with
the case where animal density gradients are caused by the use of non-random sampler
allocation, for example transects placed along linear features such as roads or streams.
This is dealt with separately for line and point transects, and at a later stage an
approach for combining the two is presented.
A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
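The detection-function and Horvitz-Thompson-like logic of the first paragraph can be sketched under a half-normal key function (our choice for illustration; distances assumed uniformly available on [0, w], so the detection probability p is the same for every animal and N-hat reduces to n / p):

```python
import math

def covered_abundance(n_detected, sigma, w):
    """Horvitz-Thompson-like abundance estimate for the covered strip
    under a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)).
    p = (1/w) * integral_0^w g(x) dx, evaluated via the error function;
    N-hat = n / p."""
    p = (math.sqrt(math.pi / 2) * sigma / w
         * math.erf(w / (sigma * math.sqrt(2))))
    return n_detected / p

# Hypothetical survey: 100 detections, sigma = 20 m, truncation w = 50 m
n_hat = covered_abundance(100, sigma=20.0, w=50.0)
```

With these numbers roughly half the animals in the strip are detected, so the estimate is close to double the count.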
Thu, 01 Nov 2007 00:00:00 GMThttp://hdl.handle.net/10023/3912007-11-01T00:00:00ZMarques, Tiago Andre Lamas Oliveira
A Bayesian approach to modelling field data on multi-species predator-prey interactions
http://hdl.handle.net/10023/174
Multi-species functional response models are required to model the predation of
generalist predators, which consume more than one prey species. In chapter 2, a
new model for the multi-species functional response is presented. This model can
describe generalist predators that exhibit functional responses of Holling type II
to some of their prey and of type III to other prey. In chapter 3, I review some
of the theoretical distinctions between Bayesian and frequentist statistics and
show how Bayesian statistics are particularly well-suited to the fitting of
functional response models because uncertainty can be represented comprehensively.
In chapters 4 and 5, the multi-species functional response model is fitted to
field data on two generalist predators: the hen harrier Circus cyaneus and the
harp seal Phoca groenlandica. I am not aware of any previous Bayesian model of
the multi-species functional response that has been fitted to field data.
The hen harrier's functional response fitted in chapter 4 is strongly sigmoidal
in the density of red grouse Lagopus lagopus scoticus, but no type III shape was
detected in the response to the two main prey species, field vole Microtus
agrestis and meadow pipit Anthus pratensis. The impact of using Bayesian or
frequentist models on the resulting functional response is discussed.
In chapter 5, no functional response could be fitted to the data on harp seal
predation. Possible reasons are discussed, including poor data quality and a
lack of relevance of the available data for informing a behavioural functional
response model.
I conclude with a comparison of the roles that functional responses play in
behavioural, population and community ecology and emphasise the need for further
research into unifying these different approaches to understanding predation,
with particular reference to predator movement.
In an appendix, I evaluate the possibility of using a functional response to
infer the abundances of prey species from performance indicators of generalist
predators feeding on these prey. I argue that this approach may be futile in
general, because a generalist predator's energy intake does not depend on the
density of any single one of its prey, so that the possibly unknown densities
of all prey need to be taken into account.
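The chapter 2 model is not spelled out in the abstract, so as a purely illustrative sketch (this parameterization is a common multi-species generalization of Holling's disc equation, assumed by us rather than taken from the thesis), a per-prey shape exponent lets one prey follow a type II response and another a type III response within the same predator:

```python
def consumption_rates(N, a, h, m):
    """Per-predator consumption of each prey type under a multi-species
    disc equation: c_i = a_i * N_i**m_i / (1 + sum_j a_j * h_j * N_j**m_j).
    m_i = 1 gives a Holling type II shape in prey i; m_i = 2 gives a
    type III (sigmoidal at low density) shape. a = attack rates,
    h = handling times, N = prey densities."""
    denom = 1.0 + sum(aj * hj * Nj ** mj
                      for aj, hj, Nj, mj in zip(a, h, N, m))
    return [ai * Ni ** mi / denom for ai, Ni, mi in zip(a, N, m)]

# Two prey: type III in prey 1 (m = 2), type II in prey 2 (m = 1).
c = consumption_rates(N=[5.0, 10.0], a=[0.01, 0.02], h=[0.1, 0.05], m=[2, 1])
```

The shared denominator is what couples the prey: intake of one prey depends on the densities of all of them, which is exactly the point made in the appendix about inferring prey abundance from predator performance.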
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/10023/1742006-01-01T00:00:00ZAsseburg, Christian
Reconstruction of foliations from directional information
http://hdl.handle.net/10023/158
In many areas of science, especially geophysics, geography and
meteorology, the data are often directions or axes rather than
scalars or unrestricted vectors. Directional statistics considers
data which are mainly unit vectors lying in two- or
three-dimensional space (R² or R³). One
way in which directional data arise is as normals to foliations. A
(codimension-1) foliation of R^d is a system
of non-intersecting (d-1)-dimensional surfaces filling out the
whole of R^d. At each point z of R^d, any given codimension-1 foliation determines a
unit vector v normal to the surface through z.
The problem considered here is that of reconstructing the foliation
from observations (z_i, v_i), i = 1, ..., n. One
way of doing this is rather similar to fitting smooth splines to
data. That is, the reconstructed foliation has to be as close to the
data as possible, while the foliation itself is not too rough. A
tradeoff parameter is introduced to control the balance between
smoothness and
closeness. The approach used in this thesis is to take the surfaces to be
surfaces of constant values of a suitable real-valued function h
on R^d. The problem of reconstructing a foliation is
translated into the language of Schwartz distributions and a deep
result in the theory of distributions is used to give the
appropriate general form of the fitted function h. The model
parameters are estimated by a simplified Newton method. Under appropriate distributional assumptions on v_1, ..., v_n, confidence regions for the true normals
are developed and estimates of concentration are given.
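The smoothness-closeness trade-off described above can be written schematically (our notation; the thesis's exact objective may differ):

```latex
\hat{h} \;=\; \arg\min_{h}\; \sum_{i=1}^{n}
  d\!\left(v_i,\; \frac{\nabla h(z_i)}{\lVert \nabla h(z_i) \rVert}\right)^{2}
  \;+\; \lambda\, R(h)
```

where d(·,·) measures discrepancy between unit vectors, R(h) is a roughness penalty, and the trade-off parameter λ controls the balance: large λ favours smooth reconstructed surfaces, small λ favours fidelity to the observed normals.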
Fri, 01 Jun 2007 00:00:00 GMThttp://hdl.handle.net/10023/1582007-06-01T00:00:00ZYeh, Shu-Ying