Economics & Finance Research
http://hdl.handle.net/10023/65
2022-01-26T18:17:51Z

Education, income and happiness : panel evidence for the UK
http://hdl.handle.net/10023/16581
Using panel data from the BHPS and its Understanding Society extension, we study life satisfaction (LS) and income over nearly two decades, for samples split by education and age, to our knowledge for the first time. The highly educated went from lowest to highest LS, though their average income was always higher. In spite of rapid income growth up to 2008/2009, the less educated showed no rise in LS, while highly educated LS rose after the crash despite declining real income. In panel LS regressions with individual fixed effects, none of the income variables was significant for the highly educated.
2018-11-14 | FitzRoy, Felix R.; Nolan, Michael A.

Afriat's Theorem and Samuelson's 'Eternal Darkness'
http://hdl.handle.net/10023/12274
Suppose that we have access to a finite set of expenditure data drawn from an individual consumer, i.e., how much of each good has been purchased and at what prices. Afriat (1967) was the first to establish necessary and sufficient conditions on such a data set for rationalizability by utility maximization. In this note, we provide a new and simple proof of Afriat’s Theorem, the explicit steps of which help to more deeply understand the driving force behind one of the more curious features of the result itself, namely that a concave rationalization is without loss of generality in a classical finite data setting. Our proof stresses the importance of the non-uniqueness of a utility representation along with the finiteness of the data set in ensuring the existence of a concave utility function that rationalizes the data.
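Afriat's conditions are directly testable: rationalizability of a finite expenditure data set by utility maximization is equivalent to the Generalized Axiom of Revealed Preference (GARP), which can be checked mechanically. The sketch below is illustrative only — the function name, the exact comparisons (real data would need a tolerance), and the Floyd–Warshall closure are our choices, not the paper's.

```python
def satisfies_garp(prices, quantities):
    """Check the Generalized Axiom of Revealed Preference (GARP) on a finite
    set of expenditure observations.  prices[t] and quantities[t] are the
    price and quantity vectors of observation t."""
    T = len(prices)
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    # cost[t][s] = expenditure at prices p^t needed to buy bundle x^s
    cost = [[dot(prices[t], quantities[s]) for s in range(T)] for t in range(T)]
    # Direct revealed preference: x^t R0 x^s iff p^t.x^t >= p^t.x^s
    R = [[cost[t][t] >= cost[t][s] for s in range(T)] for t in range(T)]
    # Transitive closure (Floyd-Warshall) gives the revealed-preference relation
    for k in range(T):
        for t in range(T):
            for s in range(T):
                R[t][s] = R[t][s] or (R[t][k] and R[k][s])
    # GARP fails if x^t is revealed preferred to x^s while x^t was strictly
    # cheaper than x^s at the prices of observation s
    for t in range(T):
        for s in range(T):
            if R[t][s] and cost[s][s] > cost[s][t]:
                return False
    return True
```

By Afriat's Theorem, a data set passing this check admits a (concave, monotone) rationalizing utility function.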
2016-08-01 | Polisson, Matthew; Renou, Ludovic

Higher tax for top earners
http://hdl.handle.net/10023/10361
The literature can justify increasing and decreasing marginal taxes (IMT & DMT) on top income under different social objectives and income distributions. Even if DMT are optimal, they are often politically infeasible; a flat tax then seems to be a constrained optimal solution. We show, however, that if we want to maximize the utility of a poor majority, any flat tax can be inferior to some IMT. We provide a sufficient condition for (two-band) IMT to dominate any flat tax and further generalize this result to allow different welfare weights, declining elasticity of labour supply and more tax bands.
2017-02-01 | FitzRoy, Felix; Jin, Jim

Examining monetary policy transmission in the People's Republic of China – structural change models with a Monetary Policy Index
http://hdl.handle.net/10023/8576
This paper estimates augmented versions of the Investment–Saving curve for the People's Republic of China in an attempt to examine the relationship between monetary policy and the real economy. It endeavors to account for any structural break, nonlinearity, or asymmetry in the transmission process by estimating a breakpoint model and a Markov switching model. The Investment–Saving curve equations are estimated using a Monetary Policy Index, which has been calculated using the Kalman filter. This index accounts for the various monetary policy tools, both quantitative and qualitative, that the People's Bank of China has used over the period 1991–2014. The results of this paper suggest that monetary policy has an asymmetric effect depending on the level of output in relation to potential, and that the People's Republic of China's exchange rate policy has restricted the effectiveness of the People's Bank of China's monetary policy response.
The financial support of the Irish Research Council and The Paul Tansey Economics Postgraduate Research Scholarship is greatly appreciated.
2016-03-01 | Egan, Paul Gerard; Leddin, Anthony J.

A characterization of risk-neutral and ambiguity-averse behavior
http://hdl.handle.net/10023/7992
This paper studies a decision maker who chooses monetary bets/investment portfolios under pure uncertainty. Necessary and sufficient conditions on his preferences over these objects are provided for his choice behavior to be guided by the maxmin expected value rule, and therefore to exhibit both "risk neutrality" and ambiguity aversion. This result is obtained as an extension of a simple re-characterization of de Finetti's theorem on maximization of subjective expected value.
2015-12-09 | Gerasimou, Georgios

Choice, deferral and consistency
http://hdl.handle.net/10023/6754
In this paper we study decision making in situations where the individual's preferences are not assumed to be complete. First, we identify conditions that are necessary and sufficient for choice behavior in general domains to be consistent with maximization of a possibly incomplete preference relation. In this model of maximally dominant choice, the agent defers/avoids choosing at those and only those menus where a most preferred option does not exist. This allows for simple explanations of conflict-induced deferral and choice overload. It also suggests a criterion for distinguishing between indifference and incomparability based on observable data. A simple extension of this model also incorporates decision costs and provides a theoretical framework that is compatible with the experimental design that we propose to elicit possibly incomplete preferences in the lab. The design builds on the introduction of monetary costs that induce choice of a most preferred feasible option if one exists and deferral otherwise. Based on this design we found evidence suggesting that a quarter of the subjects in our study had incomplete preferences, and that these made significantly more consistent choices than a group of subjects who were forced to choose. The latter effect, however, is mitigated once data on indifferences are accounted for.
Gerasimou and Costa-Gomes gratefully acknowledge financial support from the British Academy (Grant SG122338)
2014-12-26 | Costa-Gomes, Miguel; Cueva, Carlos; Gerasimou, Georgios

Dominance solvable games with multiple payoff criteria
http://hdl.handle.net/10023/5102
Two logically distinct and permissive extensions of iterative weak dominance are introduced for games with possibly vector-valued payoffs. The first, iterative partial dominance, builds on an easy-to-check condition but may lead to solutions that do not include any (generalized) Nash equilibria. However, the second and intuitively more demanding extension, iterative essential dominance, is shown to be an equilibrium refinement. The latter result includes Moulin's (1979) classic theorem as a special case when all players' payoffs are real-valued. Therefore, essential dominance solvability can be a useful solution concept for making sharper predictions in multicriteria games that feature a plethora of equilibria.
2014-07-25 | Gerasimou, Georgios

A behavioural model of choice in the presence of decision conflict
http://hdl.handle.net/10023/4532
This paper proposes a model of choice that does not assume completeness of the decision maker’s preferences. The model explains in a natural way, and within a unified framework, four behavioural phenomena that arise when preference-incomparable options are present: the attraction effect, choice deferral, the strengthening of the attraction effect when deferral is permissible, and status quo bias. The key element in the proposed decision rule is that an individual chooses an alternative from a menu if it is worse than no other alternative in that menu and is also better than at least one. Utility-maximising behaviour is included as a special case when preferences are complete. The relevance of the partial dominance idea underlying the proposed choice procedure is illustrated with an intuitive generalisation of weakly dominated strategies and their iterated deletion in games with vector payoffs.
2013-05-01 | Gerasimou, Georgios

Multi-task research and research joint ventures
http://hdl.handle.net/10023/3497
The paper shows that, whenever the completion of a research project requires the overcoming of more than one research obstacle, then Research Joint Ventures enjoy an intrinsic advantage relative to independent firms. This advantage, which has hitherto escaped attention in the RJV literature, relates to the RJV’s ability to organize research more efficiently than independent firms. The fact that RJVs can be both more profitable and yield higher expected net welfare than independent firms is surprising because it is derived from a model in which RJVs do not optimize over R&D investment. The paper exploits a basic result in systems reliability theory to establish the organizational superiority of RJVs.
2013-04-01 | La Manna, Manfredi M A

Sequential action and beliefs under partially observable DSGE environments
http://hdl.handle.net/10023/2599
This paper introduces a classification of DSGEs from a Markovian perspective, and positions the class of POMDPs (Partially Observable Markov Decision Processes) at the center of a generalization of linear rational expectations models. The analysis of the POMDP class builds on previous developments in dynamic control for linear systems, and derives a solution algorithm by formulating an equilibrium as a fixed point of an operator that maps what we observe into what we believe.
2012-01-01 | Kim, Seong-Hoon

Endogenous Price Flexibility and Optimal Monetary Policy
http://hdl.handle.net/10023/905
Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This paper extends the standard New Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenising the degree of price flexibility tends to shift optimal monetary policy towards complete inflation stabilisation, even when shocks take the form of cost-push disturbances. This contrasts with the standard result obtained in models with exogenous price flexibility, which show that optimal monetary policy should allow some degree of inflation volatility in order to stabilise the welfare-relevant output gap.
2010-01-01 | Sutherland, Alan; Senay, Ozge

Welfare, growth and environment: a sceptical review of the skeptical environmentalist (Bjørn Lomborg, Cambridge University Press, 2001)
http://hdl.handle.net/10023/659
In his wide-ranging attempt to review the literature on economic development and welfare in
relation to the environment, Lomborg claims balance and objectivity, but actually presents a
thoroughly misleading picture of environmental prospects and research, global economic
development, and the real determinants of human welfare. Statistician Lomborg blatantly
distorts the evidence by systematically selecting statistics to support his claims that global
welfare is generally improving and environmental policy is unnecessary, while denying
catastrophic risks such as prolonged drought in major food growing areas (though such
events cannot be ruled out by climate models). In spite of its numerous errors and biases,
"the Lomborg scam" (as leading biologist E. O. Wilson aptly calls it) has been welcomed by
gullible or like-minded journalists and politicians.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000052/; March 2002. Forthcoming as a review article in the Scottish Journal of Political Economy
2002-01-01 | FitzRoy, Felix; Smith, Ian

Universities and fundamental research: reflections on the growth of university-industry partnership
http://hdl.handle.net/10023/658
The recent rise in university-industry partnerships has stimulated an
important public policy debate regarding how these relationships affect
fundamental research. In this paper, we examine the antecedents and
consequences of policies to promote university-industry alliances. Although the
preliminary evidence appears to suggest that these partnerships have not had a
deleterious effect on the quantity and quality of basic research, some legitimate
concerns have been raised about these activities that require additional analysis.
We conclude that additional research is needed to provide a more accurate
assessment of the optimal level of commercialisation.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000053/; [Originally] November 2001. This version January 2002. Forthcoming in Oxford review of economic policy
2002-01-01 | Poyago-Theotoky, Joanna; Beath, John; Siegel, Donald S.

The cost of political intervention in monetary policy
http://hdl.handle.net/10023/657
Data from a unique monetary ‘experiment’ conducted in the UK during the period
1994-97 are used to investigate the cost of political intervention in monetary policy.
The paper finds that the difference between government bond yields in Germany (but
not the US) and the UK was systematically related to an index of the credibility of
monetary policy constructed on the basis of the frequency of agreements/
disagreements between the Minister of Finance who took the decisions on interest
rates and the Bank of England, whose recommendations were published with a lag,
with disagreements causing an increase in the yield differential.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000055/; Revised November 2001
2001-01-01 | Cobham, David; Papadopoulos, Athanasios; Zis, George

Taxation, unemployment and working time in models of economic growth
http://hdl.handle.net/10023/656
This paper combines collective bargaining over wages and working time with models of
endogenous and neoclassical growth. Public expenditure is funded by taxes on capital and labour
supplied by infinitely-lived households in a closed economy. Taxes on labour are generally
inefficient in both growth models, there is a “dynamic Laffer Curve”, and employment is increased
by a reduction of working hours below the collective bargaining level – except in the case of a
monopoly union. Although growth is maximised by competitive (efficient) hours, welfare-optimal
working time is below the collective bargain when unions are ‘too weak’, and vice versa.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000056/; Revised August 2001
2001-08-01 | FitzRoy, Felix; Funke, Michael; Nolan, Michael A.

Heterogeneous beliefs and instability
http://hdl.handle.net/10023/655
While Rational Expectations have dominated the paradigm of expectations formation, they have more recently been challenged on empirical grounds, for instance in the dynamics of the exchange rate. This challenge has led to the introduction of heterogeneous expectations in economic modelling. More specifically, the forecasts of market participants have been drawn from competing views. Two behaviours are usually considered: agents are either fundamentalists or chartists. Moreover, the possibility of switching from one behaviour to the other is also assumed.
In a simple cobweb model, we study the dynamics associated with different endogenous switching processes based on the path of prices. We provide an example with an asymmetric endogenous switching process built on the dynamics of past prices. This example confirms the widespread belief that fundamentalist market behaviour, compared with chartist behaviour, tends to promote market stability.
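To fix ideas, a minimal linear cobweb with a mix of fundamentalist and chartist (naive) forecasters already illustrates the stabilising role of fundamentalists. The sketch below is illustrative only: the demand and supply parameters, the function name, and the fixed mixing weight are our assumptions, and the paper's asymmetric endogenous switching rule is not reproduced.

```python
def cobweb_path(p0, steps, w_fund, a=10.0, b=1.0, s=2.0):
    """Linear cobweb: demand a - b*p, supply s*E[p].  A share w_fund of
    producers forecasts the fundamental price p* = a/(b+s); the remaining
    chartists extrapolate the last observed price (naive expectations).
    Parameter values are illustrative, not taken from the paper."""
    p_star = a / (b + s)
    path = [p0]
    p = p0
    for _ in range(steps):
        expected = w_fund * p_star + (1.0 - w_fund) * p
        p = (a - s * expected) / b  # market clearing: a - b*p = s*E[p]
        path.append(p)
    return path, p_star
```

Deviations from the fundamental price follow d_{t+1} = -(s/b)(1 - w_fund) d_t, so with s/b > 1 the pure-chartist dynamics diverge, while a large enough fundamentalist share shrinks the slope below one in absolute value and restores convergence to p*.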
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000057/
2001-01-01 | Lasselle, Laurence; Svizzero, Serge; Tisdell, Clem

Renormalization method and its economic applications
http://hdl.handle.net/10023/654
The purpose of this paper is to give new insights into the method of Helleman (1980) in the
context of macrodynamics. This method explains how a difference equation can be
locally studied from the Feigenbaum equation in the case of a constant Jacobian matrix.
First we introduce this technique. Second we apply it in two models: the model of
Matsuyama (1999) and the model of Kaldor (1957). Finally we present an extension of
the technique in the case of a non-constant (linear) Jacobian matrix and apply this extension
in the model of Médio (1992).
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000059/
2001-01-01 | Briec, Walter; Lasselle, Laurence

Growing through subsidies
http://hdl.handle.net/10023/653
We consider an overlapping generations model based on Matsuyama (1999)
and show that, whenever actual capital accumulation falls below its balanced
growth path, subsidising innovators by taxing consumers has stabilising effects
and increases welfare. Further, if the steady state is unstable under
laissez faire, the introduction of the subsidy can make the steady state stable.
Such a policy has positive welfare effects as it fosters output growth
along the transitional adjustment path. Therefore, fast growing economies,
in which high factor accumulation plays a crucial role alongside innovative
sectors that enjoy temporary monopoly rents, should follow an unorthodox
approach to stabilisation: taxing consumers and reallocating resources
to the innovative sectors.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000060/
2001-01-01 | Aloi, Marta; Lasselle, Laurence

On the persistence of output fluctuations in high technology sectors
http://hdl.handle.net/10023/652
Fatás (2000) argues that in a cross-section analysis of countries there exists a positive
correlation between long-term growth rates and the persistence of output fluctuations.
The current paper extends this line of research by examining manufacturing sectors of an
economy which can be characterised by two levels of technology: a low level and a high
level. Analysis of the data reveals a positive correlation between long-term growth rates
and the persistence of output fluctuations in ‘high-tech’ sectors. This empirical analysis is
further supported by reformulating the model of Matsuyama (1999b) in a stochastic
environment. Within this framework the model is able to capture the two main theories of
growth, namely the Solow model and the Romer model. The stochastic nature of the
long run output trend is endogenous and based on technological shocks. Despite the
cyclical nature of the shocks we are able to show that output fluctuations are more
persistent in ‘high-tech’ sectors.
Previously in the University eprints HAIRST pilot service at http://eprints.st-andrews.ac.uk/archive/00000061/
2000-01-01 | Lasselle, Laurence; Aloi, Marta; McMillan, David G.