Quasi-Newton methods for unconstrained function minimization and the solution of systems of nonlinear equations
This thesis is concerned with the unconstrained minimization of a function of n variables and, to a lesser extent, with the numerical solution of systems of nonlinear equations. The first chapter gives an account of the fundamental ideas and theorems related to the subject of this thesis, together with a brief description of some methods that historically precede quasi-Newton methods: the method of steepest descent, Newton's method, the conjugate direction methods, the contraction mapping method, and the parameter variation method. Of these, Newton's method is considered the most effective: it converges rapidly and handles a wide variety of problems efficiently. From a computational point of view, however, Newton's method is expensive.

The second chapter demonstrates how quasi-Newton methods can be regarded as an improvement on Newton's method, circumventing the computational difficulties that Newton's method faces, and describes a general procedure for deriving quasi-Newton algorithms. All of the methods generate a sequence of estimates that tends to the solution of the problem. In general, the methods that precede quasi-Newton methods employ information from the current stage only, whereas quasi-Newton methods also employ information from the stage immediately preceding it.

Chapters 3 and 4 discuss methods that employ information from earlier stages as well. Such methods are unified in one general scheme called "supermemory descent methods". Numerical experience with members of this class of methods is reported and compared with quasi-Newton methods.
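To make the contrast concrete, here is a minimal one-dimensional sketch of the quasi-Newton idea described above. It is illustrative only, not the algorithm developed in the thesis: where Newton's method for minimization needs the second derivative f''(x), a quasi-Newton step replaces it with a secant approximation built from the current iterate and the one immediately before it. The function names and test problem below are invented for the example.

```python
# Hypothetical illustration of a 1-D quasi-Newton (secant) step.
# Newton's minimization step is x_{k+1} = x_k - f'(x_k) / f''(x_k),
# which requires the (expensive) second derivative. Here f'' is replaced
# by the secant estimate (f'(x_k) - f'(x_{k-1})) / (x_k - x_{k-1}),
# using information from the present stage and the previous one only.

def quasi_newton_1d(f_prime, x_prev, x_curr, tol=1e-10, max_iter=100):
    """Minimize f by driving f' to zero with secant (quasi-Newton) steps."""
    g_prev, g_curr = f_prime(x_prev), f_prime(x_curr)
    for _ in range(max_iter):
        denom = g_curr - g_prev
        if denom == 0.0:
            break
        # Secant approximation to f''(x_curr), then a Newton-like step.
        step = g_curr * (x_curr - x_prev) / denom
        x_prev, g_prev = x_curr, g_curr
        x_curr = x_curr - step
        g_curr = f_prime(x_curr)
        if abs(g_curr) < tol:
            break
    return x_curr

# Example: f(x) = (x - 3)^2 + 1, so f'(x) = 2(x - 3); the minimizer is x = 3.
xmin = quasi_newton_1d(lambda x: 2.0 * (x - 3.0), x_prev=0.0, x_curr=1.0)
```

For a quadratic like the example, the secant estimate of f'' is exact, so a single step lands on the minimizer; in n variables the same secant condition drives the matrix updates (e.g. DFP, BFGS) that characterize quasi-Newton methods.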
Thesis, MSc (Master of Science)
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.