Show simple item record

Item metadata

dc.contributor.advisor: Buturuga, Dumitru
dc.contributor.advisor: Giumale, Cristian
dc.contributor.advisor: Dan Serbanati, Luca
dc.contributor.author: Draghici, Sorin
dc.coverage.spatial: 358 p. (en_US)
dc.description.abstract: Neural networks can be analysed from two points of view: training and generalisation. Training is characterised by a trade-off between the 'goodness' of the training algorithm itself (speed, reliability, guaranteed convergence) and the 'goodness' of the architecture (the difficulty of the problems the network can potentially solve). Good training algorithms are available for simple architectures, which cannot solve complicated problems. More complex architectures, which have been shown to be able to solve potentially any problem, do not in general have simple and fast algorithms with guaranteed convergence and high reliability. A good training technique should be simple, fast and reliable, yet also able to produce a network that can solve complicated problems. The thesis presents Constraint Based Decomposition (CBD) as a technique which satisfies these requirements well. CBD is shown to build a network able to solve complicated problems in a simple, fast and reliable manner. Furthermore, the user is given better control over the generalisation properties of the trained network than that offered by other techniques. The generalisation issue is addressed as well. An analysis of the meaning of the term "good generalisation" is presented, and a framework for assessing generalisation is given: generalisation can be assessed only with respect to a known or desired underlying function. The known properties of the underlying function can be embedded into the network, thus ensuring better generalisation for the given problem. This is the fundamental idea of the complex backpropagation network. This network can associate signals by associating some of their parameters using complex weights. It is shown that such a network can yield better generalisation results than a standard backpropagation network associating instantaneous values. (en_US)
dc.publisher: University of St Andrews
dc.subject.lcsh: Neural networks (Computer science) (en)
dc.title: Using constraints to improve generalisation and training of feedforward neural networks : constraint based decomposition and complex backpropagation (en_US)
dc.contributor.sponsor: Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom (en_US)
dc.contributor.sponsor: University of St Andrews (en_US)
dc.type.qualificationname: PhD Doctor of Philosophy (en_US)
dc.publisher.institution: The University of St Andrews (en_US)
