Files in this item
Bayesian and information-theoretic tools for neuroscience
Item metadata
dc.contributor.advisor | Foldiak, Peter | |
dc.contributor.author | Endres, Dominik M. | |
dc.coverage.spatial | 200 | en |
dc.date.accessioned | 2007-02-09T11:28:26Z | |
dc.date.available | 2007-02-09T11:28:26Z | |
dc.date.issued | 2006-11-30 | |
dc.identifier.uri | https://hdl.handle.net/10023/162 | |
dc.description.abstract | The overarching purpose of the studies presented in this thesis is to explore the uses of information theory and Bayesian inference applied to neural codes. Two approaches were taken: first, starting from first principles, a coding mechanism is proposed and the results are compared to a biological neural code; second, tools from information theory are used to measure the information contained in a biological neural code. Chapter 3: The REC model proposed by Harpur and Prager codes inputs into a sparse, factorial representation while maintaining reconstruction accuracy. Here I propose a modification of the REC model to determine the optimal network dimensionality. The resulting code for unfiltered natural images is accurate and highly sparse, and a large fraction of the code elements show localized features. Furthermore, I propose an activation algorithm for the network that is faster and more accurate than a gradient-descent-based activation method. Moreover, it is demonstrated that asymmetric noise promotes sparseness. Chapter 4: A fast, exact alternative to Bayesian classification is introduced. Its computational cost is quadratic in both the number of observed data points and the number of degrees of freedom of the underlying model. As an example application, responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analyzed. Chapter 5: I present an exact Bayesian treatment of a simple, yet sufficiently general, probability distribution model. Given the data, the model complexity and the exact expectations of the entropies and their variances can be computed with polynomial effort. The expectation of the mutual information thus becomes available as well, together with a strict upper bound on its variance. The resulting algorithm is first tested on artificial data; to that end, an information-theoretic similarity measure is derived. 
Second, the algorithm is demonstrated to be useful in neuroscience by studying the information content of the neural responses analyzed in the previous chapter. It is shown that the information throughput of STS neurons is maximized for stimulus durations of approximately 60 ms. | en |
dc.format.extent | 1337112 bytes | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | en |
dc.publisher | University of St Andrews | |
dc.rights | Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Generic | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/2.5/ | |
dc.subject | Information theory | en |
dc.subject | Bayesian methods | en |
dc.subject | Computational neuroscience | en |
dc.subject.lcc | QP356.E6 | |
dc.subject.lcsh | Neurosciences | en |
dc.subject.lcsh | Information theory | en |
dc.subject.lcsh | Bayesian statistical decision theory | en |
dc.subject.lcsh | Vision--Mathematical models | en |
dc.title | Bayesian and information-theoretic tools for neuroscience | en |
dc.type | Thesis | en |
dc.contributor.sponsor | ITAS-SYS GbR | en |
dc.type.qualificationlevel | Doctoral | en |
dc.type.qualificationname | PhD Doctor of Philosophy | en |
dc.publisher.institution | The University of St Andrews | en |
Except where otherwise noted within the work, this item's licence for re-use is described as Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Generic
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.