Sectors Matter!
Sectors Matter! Exploring Mesoeconomics
Editor
Dr. Dr. habil. Stefan Mann, MSc
Federal Research Station Agroscope
Agroscope, Socioeconomics
Tänikon 1
8356 Ettenhausen
Switzerland
[email protected]
ISBN 978-3-642-18125-2
e-ISBN 978-3-642-18126-9
DOI 10.1007/978-3-642-18126-9
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2011928712
© Springer-Verlag Berlin Heidelberg 2011

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: eStudio Calamar Steinen
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Contents

Editorial: Why Should Sectors Matter? ............ 1
Stefan Mann

Part I  Sectors Matter for Society

On the Reality Behind Money ............ 7
Erik Händeler

Merit Sectors ............ 41
Stefan Mann

Part II  Sectors Matter for Development

Economic Growth Through the Emergence of New Sectors ............ 55
Andreas Pyka and Pier Paolo Saviotti

Mesoeconomics: Bridging Micro and Macro in a Schumpeterian Key ............ 103
Kurt Dopfer

Coordination on "Meso"-Levels: On the Co-evolution of Institutions, Networks and Platform Size ............ 115
Wolfram Elsner and Torsten Heinrich

Part III  Sectors Matter in Practice

Changes in Industrial Structure and Economic Growth: Postwar Japanese Experiences ............ 167
Hiroshi Yoshikawa and Shuko Miyakawa

The Mesoeconomics of Social Industries ............ 219
Benoit Pierre Freyens

Governmental Discrimination Between Sectors: The Case of Australian Water Policy ............ 239
Lin Crase and Sue O'Keefe

About the Authors ............ 251
Editorial: Why Should Sectors Matter?

Stefan Mann
In computational science there is an ongoing debate about the appropriate level of abstraction (e.g. Kramer 2007; Abbott and Sun 2008; Blackwell and Green 2008). In economics this debate is unfortunately missing, although it would be central to judging the potential importance of mesoeconomics. There is a consensus that for utility maximization under scarcity (an abbreviated definition of economics), markets as well as whole economies need to be analysed (abbreviated definitions of micro- and macroeconomics). But how do we find out how useful it is, in addition, to treat the economic sector as a unit of analysis? How can we judge the potential of mesoeconomics?

Observing the last 100 years of research history in economic science, we can trace a process of increasing abstraction. Apparently, a consensus emerged that for economic development it would not really matter what exactly was produced; the price, and therefore the value, of the output would provide enough information about the utility generated. Table 1 illustrates this observation. The left-hand columns show how publications about single economic sectors evolved over the past 100 years: they peaked in the second half of the twentieth century, and since then research has either stagnated (as in the case of the textile industry) or lost intensity. The pattern for three economic terms that are usually used independently of single sectors looks rather different: their use among economic scholars was clearly on the rise at the turn of the twenty-first century.

Another indicator of the disappearing focus on sectors is economics textbooks. In Alfred Marshall's (1890) seminal "Principles of Economics," several chapters are devoted to industrial organization, with sub-chapters about mines and quarries or about education in art. Other chapters are concerned with the fertility of land, i.e. they focus on the primary sector.
All of that has vanished in the modern standard textbook with the same title by N. Gregory Mankiw (2004).

Table 1  Number of citations (title or keywords) in the British Library

            Shipping   Textile    Steel      Capital        Monetary   Growth
            industry   industry   industry   accumulation   policy     model
1910–1919   1          12         16         0              0          0
1940–1949   3          34         37         0              8          0
1970–1979   18         244        320        31             254        11
2000–2009   12         266        198        36             1,609      42

S. Mann (*) Agroscope, Ettenhausen, Switzerland, e-mail: [email protected]
S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_1, © Springer-Verlag Berlin Heidelberg 2011

Sectoral considerations
have been replaced by chapters about "The Real Economy in the Long Run" or "The Influence of Monetary and Fiscal Policy on Aggregate Demand." The level of abstraction has risen, so that an analysis of sectors seems an unnecessary deviation from the general longing for growth and prosperity. Economic science as a whole may appear to be in the process of either forgetting about sectors or even explicitly neglecting their importance (Glowicka 2006; Srholec and Verspagen 2008).

However, there has always been a strand of research that focuses its attention on the specific characteristics of sectors. In this school of thinking, sectors and their shifting importance are mostly used to explain certain growth patterns. Baumol (1967), for example, divides the economy into two sectors: a "progressive" one that uses new technology and grows at a certain rate, and a "stagnant" one that uses labour as the only input and produces services as final outputs. He then claims that the production costs and prices of the stagnant sector should rise indefinitely, a process known as "Baumol's cost disease," and that labour should move towards the stagnant sector. Likewise, Pasinetti (1993) emphasizes that knowledge is the key source of the wealth of a nation, but that its role differs considerably between sectors. Empirically, Fisher (2006) shows that technology shocks account for 80% of business cycle variation in working hours and 38% in output. Malerba and Orsenigo (1997) and Malerba (2004) summarize the different sectoral characteristics in terms of technology and innovation patterns as sector-specific technological regimes. Other economists are aware that demand is a key driver of structural change. Aoki and Yoshikawa (2002) are among the most influential scholars in this respect, showing how new products and industries create additional demand that, in turn, causes capital accumulation and growth in the respective sectors.
This book attempts to make the case that a good deal of abstraction should be abandoned in favour of a focus on sectoral development, in order to grasp a greater share of what matters in the economy. Three levels of argumentation are used to make that claim: a societal level for those primarily interested in an interdisciplinary discourse, a developmental level for readers wishing to continue the theoretical debate within economic research, and a practical level that illustrates the relevance of sectors in the real economy.

Two main arguments, one rather descriptive and the other normative, form the first part of the book, which argues on the societal level. Erik Händeler describes the history of the last 150 years as a history of sectoral developments. He shows that economic booms were usually connected with a major innovation boosting one particular sector. Many demographic observations and military conflicts can be
interpreted as following from such processes. I then argue in a second chapter that the sectoral composition of an economy is not value-free but may have significant impacts on the well-being of its population.

Andreas Pyka and Pier Paolo Saviotti's chapter, which opens the second part of the book, can be interpreted as a more formal, economic formulation of the claims made in the first part. They model the emergence of new sectors and the social changes that go along with it. They make some reference to Schumpeter, though not as much as Kurt Dopfer in the subsequent chapter. Dopfer shows the causal links between micro and macro development, which often necessarily run through the sectoral level. Wolfram Elsner and Torsten Heinrich close the second part of the book by describing the general mechanisms by which groups of intermediate size interact, a significant help for understanding interaction processes in sectors.

Japan is certainly a good illustration of the claim that sectoral growth and contraction processes shape a large part of a country's economic fate. This is shown by Hiroshi Yoshikawa and Shuko Miyakawa in the chapter which opens the third and most practical part of the book. Another useful illustration of the significance of sectors is the very special case of the social service sector. Ben Freyens shows us the potentials and limits of standard economic approaches when applied to this particular case. At the end of the book, Lin Crase and Sue O'Keefe present the case of Australian water policy and issue a warning: not every governmental attempt to distinguish between sectors is efficient. Perhaps governments would sometimes do better to treat sectors equally. Economists, however, will certainly fare better if they distinguish between sectors more often than they currently do.
References

Abbott R, Sun C (2008) Abstraction abstracted. In: Proceedings of the 2nd international workshop on the role of abstraction in software engineering, Leipzig, Germany. ACM, New York, NY
Aoki M, Yoshikawa H (2002) Demand saturation-creation and economic growth. J Econ Behav Organ 48(2):127–154
Baumol W (1967) Macroeconomics of unbalanced growth: the anatomy of urban crisis. Am Econ Rev 57:415–426
Blackwell AL, Green CG (2008) The abstract is 'an enemy': alternative perspectives to computational thinking. Presentation at the 20th workshop of the Psychology of Programming Interest Group (PPIG 08)
Fisher JDM (2006) The dynamic effects of neutral and investment-specific technology shocks. J Polit Econ 114(3):413–451
Glowicka E (2006) Effectiveness of bailouts in the EU. Discussion Paper No. 176. Wissenschaftszentrum, Berlin
Kramer J (2007) Is abstraction the key to computing? Commun ACM 50(4):36–42
Malerba F (2004) Sectoral systems of innovation. Cambridge University Press, Cambridge
Malerba F, Orsenigo L (1997) Technological regimes and sectoral patterns of innovative activities. Ind Corp Change 6(1):83–118
Mankiw NG (2004) Principles of economics, 3rd edn. Thomson, Mason
Marshall A (1890) Principles of economics. Macmillan, London
Pasinetti LL (1993) Structural economic dynamics. Cambridge University Press, Cambridge
Srholec M, Verspagen B (2008) The voyage of the beagle in innovation systems land: exploration of sectors, innovation, heterogeneity and selection. Working Paper Series 008-2008. United Nations University, Maastricht
Part I
Sectors Matter for Society
On the Reality Behind Money

Erik Händeler
Most schools of economics discuss the monetary symptoms of economic development. The economist Nikolai Kondratieff (1892–1938) proposed that money was the result rather than the cause of economic development: the cause was to be found in the sector-specific changes occurring in real life. In view of the financial crisis, his perspective on sector-related real economics offers an opportunity to open up a fresh debate on the basic assumptions of economics.

It is impossible to ignore the growing criticism of economics as unrealistic, an ivory tower, intellectually dishonest. Its proponents are more disunited than those of virtually any other branch of science. Because mainstream economists neither foresaw the post-2008 financial crisis nor are really able to explain its deeper-seated causes, it is becoming easier for alternative theories to get a hearing. This is not the inconsequential babble of a couple of theoreticians; it has to do with the question of how we visualise reality. Which theories the majority consider realistic determines which economic policy is widely accepted, how we perceive an impending economic crisis, i.e. how we approach turbulence in order books and on stock exchanges, how we gear our businesses, how we work. Future prosperity is determined by the quality of economic debate.

Macroeconomists have hitherto perceived their discipline mainly through its monetary indicators. They discuss money: prices, interest rates, wages, taxation and state spending. At the same time, since Max Weber over 100 years ago, we have known that prosperity is first and foremost a cultural achievement. The Russian economist Nikolai Kondratieff likewise thought that the reason for economic development was to be found in the changes of real life. It has to do not only with steam engines or computers, in other words tangible things, but with new patterns of success, organisational forms, management methods, educational requirements – changes in human beings' heads and behaviour.
E. Händeler (*) Lenting, Germany, e-mail: [email protected]
S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_2, © Springer-Verlag Berlin Heidelberg 2011

Kondratieff, with his all-embracing
perspective, provides an approach to understanding how the virtual/monetary and real/material side of the economy are linked. It is a perspective which changes the perception of the unstable post-2008 world economy, leading to a different economic policy which begins with scarce factors of production.
Towards a More Holistic View of Economics

For decades bankers were highly respected until, all at once, all over the world, they supposedly decided to become greedy and gamble away our prosperity. So they are blamed for the real economy suffering and fewer goods being bought – that is how the general public perceives the economic turmoil. But nobody goes on to ask why many companies and house owners failed to service their loans specifically from 2008 onwards, or why bankers had previously given loans to people who could not actually afford them. Why have interest rates stayed so low since the turn of the millennium? Why, when the New Economy crashed and again during the post-2008 financial crisis, did share prices and raw material prices go crazy?

The answers are to be found in real life: since the 1970s the computer has hugely boosted our productivity, saved working time and resources, and thus made new investment profitable and created new jobs. It worked for us until shortly after the turn of the millennium, and subsequently in the emerging nations. Yet at some point every technology network undergoes sweeping expansion. Anyone now wanting to invest money was no longer able to find lucrative opportunities. The over-supply of free capital put pressure on interest rates, which no longer yielded much – so money went into speculation on shares, raw materials or real estate and drove their prices to heights as yet unknown. If share prices rise within a short period, this does not mean that the companies have increased in value (as we all believed), but that in real life there is nothing else which represents a worthwhile investment. The amount of free money stimulated irresponsible lending – the symptom of a structural cycle at its end. But it did not cause the economic slump. The bubble burst because there was a feeling in the real economy that the usual advances in productivity were absent.
Prices and profits were subject to downward competition; it was hardly worth employing people or investing, and the global economy faltered. Nothing out of the ordinary: there have always been times of uncertainty throughout history, whenever a technology network has spread extensively but the infrastructure and skills of the next technology network have not yet developed sufficiently – for instance, the Founders' Crash of 1873 in the years after the railways were constructed, after electrification in the 1920s, and after the auto boom into the 1970s. After the 1973 oil crisis, more and ever better cars were indeed built. The driving force increasing productivity, however, was now the computer, with the aid of which cars could be built more cheaply and of superior quality – until recently. Since ever more, ever better information technology can no longer noticeably improve any texts or designs, the economy
[Fig. 1 Schematic diagram of Kondratieff long waves: steam engine (textile industry), 1780s; railways (mass transport), 1840s; electric power (steel, chemistry, mass production), 1890s; motorcar (individual mobility), 1940s; information technology (structured information), 1980s. The Y axis symbolises economic dynamics, i.e. the rhythm of macroeconomic productivity trends. From: Erik Händeler, Die Geschichte der Zukunft]
will initially mark time until we succeed in climbing onto the next rung of prosperity using new tools and new skills (Fig. 1).

For Kondratieff it is these historically pivotal inventions which carry the economy to a new level of prosperity – for example the steam engine, which was developed not for fun but out of economic necessity arising from the greatest shortage of the time, the shortage of mechanical power. When it drove spinning machines, they were 200 times more productive than a manual spinning wheel. Many more textiles could be produced at far lower prices; many more people could afford clothing. A new infrastructure was needed for this: steam engines need coal; more ore had to be mined and transported by canal. The workers became a relevant social class. Because of one invention there were so many new applications that the whole economy grew for 20 or 30 years – until there was another limiting factor.

If someone has five left shoes and seven right shoes, how many pairs does he have? Not six, only five pairs, because he cannot increase the number of left shoes in the short term. The same is true of the factors of production, which do not all keep pace with the economy. At some point one of them becomes so expensive that entrepreneurs no longer find expansion worthwhile – after Kondratieff's first wave, that was the transportation of ore, coal and industrial goods. Productivity stagnated, also because the hitherto dynamically growing technology network had spread extensively. Entrepreneurs no longer made a profit and therefore had no reason to invest further and employ workers. A long and deep economic crisis followed. Although the high unemployment of those days was not measured statistically, the novels of the time, for example those of Victor Hugo ("Les Misérables") and Charles Dickens, have passed it down. Only after the 1840s, when large-scale railway construction took place, was the economy able to grow again.
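The shoe arithmetic above is the classic limiting-factor calculation: output is capped by the scarcest input. As an illustrative sketch (not from the book, and the function name is my own), a fixed-proportions production rule can be written as the minimum over all factors of availability divided by requirement:

```python
def max_output(available, required_per_unit):
    """Fixed-proportions production: output is capped by the
    scarcest factor, i.e. the smallest availability/requirement ratio."""
    return min(available[f] / required_per_unit[f] for f in required_per_unit)

# The shoe example: five left shoes and seven right shoes make only five pairs,
# because a pair requires exactly one of each.
pairs = max_output({"left": 5, "right": 7}, {"left": 1, "right": 1})
```

The extra two right shoes add nothing, just as cheap ore adds nothing once transportation has become the binding constraint.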
Transportation costs fell; trade and production became widespread; larger quantities paid off. Railways were not built because people no longer felt like travelling by coach, but because there were competing businessmen
who had to cut their costs, and in those days the greatest productivity reserves happened to be in railway construction. People do not like altering their behaviour, however, which is why the structures required for a new Kondratieff cycle are always held back at the start. Princes did not want railways to cross their land boundaries – they were afraid of losing power. Medical experts predicted brain disease just from looking at a train travelling at more than 30 km/h; preachers fulminated that if God had wanted men to move on wheels, he would have given them some. Until at some point the economic pressure was so intense that local businessmen got together, collected their money, rolled up their sleeves and built a railway to the nearest big town.

As a rule the requisite changes do not come from above, or from art or the universities, but from economics. In the 1848 Revolution businessmen campaigned for freedom of the press and freedom of assembly in order finally to have a say in the investment decisions of the inefficient princely state. Although the revolution failed – the citizens got frightened of the workers and dutifully went back home – in return the German monarchs at last guaranteed free economic activity, and the railways could be built. The economy set a breathtaking pace – for a quarter of a century – until the additional benefit of a further kilometre of railway again diminished.

Since at the beginning of this Kondratieff cycle the big cities were linked to rural regions, fresh food could be delivered to the metropolises every day – and only then did large industrial enterprises become feasible, with armies of workers which had previously not been so easy to feed. At some point, however, only branch lines were built, which were not so economical. Productivity stagnated again, in Germany after the Founders' Crash of 1873. Once again the world suffered a decade-long economic crisis which contemporaries perceived as the "deep depression".
Only when the next bottleneck was overcome – mass-producing goods with the aid of electric power – could the economy grow again. What a revolution: production no longer depended on hot, hissing steam engines; power could suddenly be metered and carried silently to the precise spot on the factory floor where it was needed. Mass production became possible, electric power improved steel manufacture, and the chemical industry could really get going. The economy boomed from the 1890s into the First World War; the war sped up electrification. By the beginning of the 1920s most of the factories in Europe and the USA were powered by electricity; by the end of the 1920s almost every household was connected to the electricity grid. Productivity therefore stagnated, accompanied by the usual symptoms: low interest rates, distribution battles, trade wars, falling prices, wages and profits.

So the world economic crisis originally had nothing to do with the First World War – in that case things would have been bound to go downhill after the Second World War as well. Misery came because investment in electrification was by and large complete. The crisis lasted for different lengths of time in different countries, in many, like France and the USA, until the Second World War. Then innovations to do with individual mobility pulled the economy up again – the combustion engine and assembly line, together with the ability to refine large quantities of petroleum energy
cheaply. The German "economic miracle" did not really happen because of Ludwig Erhard, the Marshall Plan and the efficient Germans, but because during the Third Reich (for military reasons) so much had already been invested in new infrastructure: the autobahn had been built, prisoners of war subsequently built more roads, soldiers had sat the driving test. The whole national economy invested so much in factories making tanks and VW Kübelwagen (military utility vehicles) that after the war this technology network was efficient and economical enough to kick-start the economy (all over the world, for that matter, even in countries which were not involved in the war). Again transport costs fell, allowing far more goods than before to be distributed – for example in shopping centres on arterial roads. People were again freed from a lot of enforced social restrictions. Limitless individuality became possible, at least as an option. This was reflected in music, art, residential building and family structures.

The economy grew until the oil crisis of 1973/1974. Currencies tumbled, trade barriers went up again, zero growth and stagflation (from stagnation and inflation) seemed to become entrenched. But neither OPEC nor the Arabs were to blame for this crisis: the major motorways had been built, every middle-class family had its car, and the marginal utility of the automobile investment network diminished. For the first time, however, this Kondratieff downturn was very short: the American military had previously spent enormous amounts from the defence budget on computer development, defence and space travel. Soon government authorities were using the computer in administration, and it diffused into the American economy. From kitchen scales (now a computer with a weight sensor) through word processing to robotic control, the technological principle is always the same: the entire world economy is carried by its advances in productivity.
Until all the working operations which can be meaningfully taken away from humans have by and large been streamlined. The Russian economist Nikolai Kondratieff had already conceived this real economic perspective over 80 years ago. Although it was incorporated in the work of Joseph Schumpeter and became known in the West, it attracted hardly any attention compared with the monetary-mathematical models emerging in the 1940s.
The "Inner Laws of Socio-economic Development"

For Kondratieff the theory of long wave cycles started with the question of why "the dynamic of economic life in the capitalist social order was not simple and linear, but of complex and cyclical character"1 – in short, why it fluctuated so dramatically. For when he investigated dynamic change in growth rates of the quantities and prices of several goods in England, France and the USA since the late eighteenth century, in the early 1920s he found two and a half cyclical waves roughly 47–60 years long (see graph), including in coal consumption, interest rates, wages, bank deposits and production by individual branches of industry. His original statistical approaches are unimportant in explaining his theory – he linked long cycles to the increase in a "fund of interrelated capital goods" in real life, subsequently termed "basic innovation" by Schumpeter. For the sake of completeness his methodological point of entry will be explained below in a few paragraphs.

1 Kondratieff, N.D.: Die langen Wellen der Konjunktur. In: Archiv für Sozialwissenschaft und Sozialpolitik, 56 (1926), p. 573.
[Graph: the major price index, 1800–1920 – general; of industrial goods; of agricultural goods – plotted as deviations from trend.]
Among the numerical series studied by Kondratieff were "net worth elements" such as rate of return on investment, wages, bank deposits; "mixed character elements", i.e. those influenced by both "value" and "natural" factors, for example the full scope of foreign trade expressed in values; and "net natural" elements such as production in various branches of industry and the consumption of certain goods. In their "untreated" state the long waves would "not have emerged at all, or not clearly enough". Kondratieff first converted the country data, some of which went back to the eighteenth century, into per capita per year values so that the curves came as close to real growth as possible, even though he remarked in a footnote "that – with a few exceptions – the same results are achieved without this division".2

2 Kondratieff, Lange Wellen, p. 576. The US Kondratieff economist Brian Berry confirms this: the dynamic of his curve for overall US economic growth and that of the growth curve per capita show no essential difference. See Brian J.L. Berry: Long-Wave Rhythms in Economic Development and Political Behavior. Johns Hopkins University Press, Baltimore, Maryland and London (1991), p. 2.
For Kondratieff the numerical series thus obtained were still variables combined from two factors. The first was the "general trend" – in the discussion on the German economic miracle after 1948 the concept later turned up as "growth path" – i.e. the long-term linear average of the numerical series; "in essence the cyclical is missing" in this general trend. The other variable he considered was "the acceleration of development": the value of how strongly one year's growth actually deviated from this "secular trend". Both the change in the rhythm of ascent and its acceleration are fluctuating variables and reflect the changing economic climate. This, however, is the result of waves of varying length as well as of regional influences and global economic coincidences. In order to filter out long waves in the deviation of empirical numerical series from their long-term average, Kondratieff analysed the numerical series on a sliding average basis, selecting a moving average of 9 years in order to cancel out the influence of short, medium and random fluctuations.

Kondratieff dated the rise of his first wave between 1789 and 1814, so it lasted 25 years. Its fall began in 1814 and ended in 1849, encompassing 35 years; the cycle of price movement was complete in 60 years. According to his data the second wave rose for 24 years, from 1849 to 1873, and then fell over 23 years until 1896; this price movement cycle took 47 years to complete. His third wave rose again for 24 years, from 1896 to 1920, and was in the process of falling while he was writing his articles. He thus identified two and a half long waves which fluctuated between 47 and 60 years.
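The filtering procedure described above can be sketched in modern terms (an illustrative reconstruction under my own assumptions, not Kondratieff's actual computation, and the function names are hypothetical): fit a long-term linear trend to a series, take each year's deviation from that trend, and smooth the deviations with a centred 9-year moving average so that short and medium fluctuations cancel out and only the long wave remains.

```python
def linear_trend(series):
    """Ordinary least-squares line through (year_index, value) pairs,
    standing in for Kondratieff's long-term 'general trend'."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, series))
             / sum((x - mx) ** 2 for x in xs))
    return [my + slope * (x - mx) for x in xs]

def smoothed_deviation(series, window=9):
    """Deviation of each value from the linear trend, smoothed with a
    centred moving average (window=9 years, as Kondratieff chose).
    Returns len(series) - window + 1 values."""
    trend = linear_trend(series)
    dev = [y - t for y, t in zip(series, trend)]
    half = window // 2
    return [sum(dev[i - half:i + half + 1]) / window
            for i in range(half, len(dev) - half)]
```

On a series that is pure trend the smoothed deviations are (numerically) zero; a long cycle superimposed on the trend survives the smoothing, while year-to-year noise is largely averaged away.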
Kondratieff was unable to identify any long waves in French cotton consumption or in US wool and sugar production.3 By and large the numerical series of all the industrialised countries ran in parallel, but only by and large: in the USA the second long wave reached its upper turning point in 1866, shortly after the Civil War and much earlier than in Europe, where it peaked in 1873.4 Kondratieff knew that the data material was only a kind of smoke, not the fire itself: the causes of the economic situation would have to be sought "in the inner laws governing socio-economic trends"5 – what he meant is explained below. On the whole Kondratieff did not consider the existence of long waves to be proven, because he was only able to study a 140-year period, but there was enough data to make their cyclical character appear very probable.

3 Long waves cannot be identified in saturated markets such as those for sugar or basic commodities. Even in the fourth Kondratieff downturn households certainly did not economise on sugar just because they had to make do on a low household budget.
4 Kondratieff, Lange Wellen, p. 578.
5 Kondratieff, N.D.: Die Preisdynamik der industriellen und landwirtschaftlichen Waren (Zum Problem der relativen Dynamik und Konjunktur). In: Archiv für Sozialwissenschaft und Sozialpolitik, 60 (1928), p. 1–85, here p. 36.
In the synchronous waves, on the other hand, his critics saw only external coincidences: wars, revolutions or a new gold strike. Kondratieff countered that they were confusing cause and effect. No, wars did not initially influence the long economic cycle; rather, because (economic) power shifts occurred at times of "high tension in economic growth", wars were mainly fought shortly before the peak of a long-term upswing (also over scarcer resources) – for example the Napoleonic Wars in the first (so designated by Schumpeter) Kondratieff cycle, the American Civil War and the European Wars of Unification in the second, the First World War in the third, and the Second World War and Cold War in the fourth. Social upheavals, or revolutions, also occurred "most easily under the pressure of new economic forces".6

His thesis can be verified by history: steam power enabled the French bourgeoisie to escape the domination of an incompetent aristocracy. In 1789 they swept the king away and finally used parliament and the free press to determine how tax money was to be invested – the populace, for whom only a scrap of bread was at stake, tagged along. The European Revolution of 1848 was also a revolution of the bourgeois and tradesmen, but it failed because the upper classes were afraid of the increasingly radical demands of the emerging working class. They therefore horse-traded with their monarchs: no more revolution, in return for which the way was finally cleared for railways and cross-border liberal economies. Even the student unrest of 1968 took place during the most dynamic spread of the motorcar.

To critics who asserted that new gold fields would stimulate a long wave, he replied that, on the contrary, a booming upswing strengthened demand for gold and increased its price, thus making it once again economically viable to open up new mines.
Fluctuations in gold mining, wars and revolutions, the integration of additional countries in the world economy and changes of technology after lean growth years were not, therefore, random new circumstances and events coming from outside; they were not the forces initiating movement. But they were characteristic features of the long-term upswing – which, once they had become reality, exerted a strong influence on the rhythm and direction of the economic dynamic. From the beginning Kondratieff embedded his theory in the interrelationships of society as a whole and wrote that the long wave was a fact “the impact of which is felt in all the principal areas of social and economic life”.7 This is confirmed by a look at real history: the Zeitgeist also predominantly follows long cyclical waves in its conservative (downturn) and liberal (upturn) trends – in the history of art, religiosity, electoral behaviour, even in the birth rate (at least in the industrialised First World).
6 Kondratieff, Lange Wellen, p. 594.
7 Kondratieff, Lange Wellen, p. 599.
On the Reality Behind Money
15
Berry (1997; Figure X)8 shows the number of live births in the USA in the twentieth century. The curve follows the long-wave dynamic: it was not even upset to any appreciable extent by two world wars. It stands to reason that anyone having to go through life on 6-month contracts is not so ready to marry and have three or four children. If, on the other hand, it seems obvious that there is always the choice of several well-paid jobs, then people are more willing to marry and start a family – like those who became parents in the 1950s and 1960s. Long waves are therefore not only a process of economic reorganisation but one which affects the whole of society: reality is an entity. It is inconceivable that technical development should stand still while the economy booms and that at the same time a cautious penny-pinching lifestyle should prevail in art and politics. Increasing prosperity affects behaviour. In the long upswing there tends to be widespread optimism in art and politics, but more pessimism in the long downswing. If you think your job is secure you are readier to take out a long-term loan for the house or the car. The easier it is to earn an income, the less need there is to struggle and the easier-going the attitude to life as reflected in every sphere. Biedermeier and Romanticism held sway in the recession of the 1820s, historicism in the 1880s – houses were built like castles, poems written about powerful princes from earlier epochs. Art Nouveau, on the other hand, with its flowery ornamentation and liberal ideas – as in Frank Wedekind’s play “Spring Awakening” – flourished in the vigorous upswing during electrification.
Even the period of the Beatles with its laid-back music was associated with the car boom: you could wear your hair as long as you wanted because it did not matter what teachers or instructors said, because everyone got a job and in any event the economy was growing by 8% every year – not because people were so efficient, but because the technology network surrounding the car was expanding so strongly. Trade unions have always had their way in a long upturn; in a long downturn, on the other hand, workers have been stripped of their rights. And ecclesiastical history too can be traced along Kondratieff waves: the optimistic spirit of the Second Vatican Council, when the Catholic Church was reformed, could only happen against the background of the car boom, which first permitted individualism with its critical reflection. The causes of long-term economic movements are more deep-seated, wrote Kondratieff. Although revolutionary new technologies carried long waves, even they were not random. On the one hand, discoveries and inventions were made in a direction and with an intensity corresponding to the requirements of practical reality9 – after all, the same discoveries were often made simultaneously in different places (later on, for example, the computer). On the other hand, if the economic prerequisites are not there it is not enough for the scientific and technical conditions for a new production technology to be present – Leonardo da Vinci’s inventions of 1500 had no chance of being taken up in their socio-economic environment. An innovation can only have an impact when it provides greater benefit and can be afforded by an increasing number of people. No, long waves sprang from “the essence of the capitalist economy”, as Kondratieff put it in another essay in 1928.10 Money flowed in the direction where it could earn the most, where “production costs in their real-physical expression” fell because that was where a new “fund of long-term capital goods” increased productivity and provided work and new prosperity. But one thing at a time: all branches of the economy are linked, each one is directly or indirectly a sales market for the others. Wages and profits earned in one sector are spent again in others.

8 Brian J.L. Berry: Long Waves and Geography in the 21st Century. In: Futures, Vol. 29, No. 4/5 (1997), pp. 301–310.
Now, although Kondratieff established that prices in different sectors fluctuate in a similar way, they rise at different angles, turn out to be weaker or stronger, and many show a time lag. “This means that changes in individual branches and elements of the national economy are not totally consistent.”11 There are changes not only in the general economic climate, but also in the extent and price relationship of different sectors of the economy to one another. The number of goods a sector produces and the strength of its growth depend upon how many factors of production are available to it, which in turn depends on the possible profit. If the costs in one line of business fall further than in other sectors of the economy, it will become more profitable for investors. The result will be “that (this sector) attracts a relatively greater amount of capital and its production increases in absolute and relative terms”.12 Capital will flow into this sector until the rate of profit in all the economic sectors balances out again, but the additional capital ensures higher production in this one sector. This would apply equally to trade. The actual profitability of a sector, however, depends on how much more productive it becomes – for profits then increase correspondingly due to better production methods, means of communication and organisational procedures. The steady rise in the productivity of labour – “a worldwide process” – was the most important factor in reducing real production and transport costs and changing structures. One business implementing an innovation forces others to adapt – not only competing businesses but the environment as well, for example by pulling in workers to whose needs other sectors of the economy adjust. With innovations the same labour creates more output and reduces costs in domestic and foreign competition. This is ultimately expressed in falling cash prices for goods, which helps market a greater quantity.

9 Kondratieff, Lange Wellen, p. 593.
10 Kondratieff, N.D.: Die Preisdynamik der industriellen und landwirtschaftlichen Waren (Zum Problem der relativen Dynamik und Konjunktur). In: Archiv für Sozialwissenschaft und Sozialpolitik, 60 (1928), pp. 1–85.
11 Kondratieff, Preisdynamik, p. 7.
12 Kondratieff, Preisdynamik, p. 8.
Kondratieff’s Concept of the “Real Cost Limit”

Productivity grows, not uniformly but dynamically, i.e. with an accelerating or decelerating rhythm. And at some point that is it: productivity stagnates, entrepreneurs compete away each other’s profits and have less and less room to manoeuvre during price negotiations. According to Kondratieff the reason is that factors of production are indeed duplicable in the long term, but are limited in the short term, especially real capital. By this Kondratieff means not “the monetary expression of production”, but “the real-physical expression of production costs”,13 in brief the real cost limit which throttles further growth. A graphic example of this is the shortage of means of conveyance which restricted continuing economic growth in the early nineteenth century. Nor could this bottleneck be removed by additional coaches (in real terms too expensive by comparison with the return; the time and resources, for example horses, were in competition with use in other sectors, for instance in agriculture, where they were generally of greater utility). The bottleneck could only be removed by a seminal invention like the railways (subsequently called basic innovations by Schumpeter) with the associated infrastructure to solve the problem with higher quality. But these “goods of long-term utility”14 are not produced from one day to the next. In order to produce them society needs relatively long periods extending beyond the scope of the usual commercial and industrial cycles. It takes decades for a technology to mature, for the general public to be persuaded by it, for sufficient investment in the infrastructure and for enough trained specialists in the new basic technology to become gradually available. Once established it generates associated innovations which fuel the economy still further. That is why the cycles described by Kondratieff last for between 40 and 60 years. The pattern of long waves is determined by the rhythm and volume at which the new technology network penetrates society, makes individual sectors more productive and hence creates income, thus stimulating economic growth in other sectors as well. They therefore stem from investment in the infrastructure surrounding a new “fund of long-lasting capital goods”. This does not grow steadily or uniformly, noted Kondratieff – essentially why the economy fluctuates even within the long cycle and sometimes even falters at the time of most explosive growth (as during the 1844 railway boom, electrification in 1904, the computer boom of 1987 and 1993). Producing this kind of capital goods calls for “a huge expenditure of capital” in the very long term – one only has to think of the great investment necessary for the building of railways across the country. Even the computer was not just developed and made accessible to working people: a lot of money and a long learning curve were needed so that they could handle the new technology. In the upswing the long wave therefore needs enough sufficiently cheap loan capital and a low price level in order to stimulate long-term investment. So long periods of crisis with almost zero interest rates and minimal profits also have their positive side: this pressure forces society to pull itself together and change its structures. In this situation, says Kondratieff, more and more will “sooner or later” be invested in the basic new capital goods, giving rise to a long upward cyclical wave. During its course capital becomes increasingly scarce and more expensive. This trend intensifies whenever domestic or foreign policy conflict erupts, resources are unproductively consumed, economic potential is destroyed. All this combined brings the wave to a standstill and puts it into reverse: interest rates fall, the rhythm of production and trade slows down, prices drop.

13 Kondratieff, Preisdynamik, p. 20.
14 Kondratieff, Preisdynamik, p. 36.
In the ensuing period of economic stagnation foreign policy and domestic social relations would be pacified. At the same time savings activity would rise, the preconditions for a new long-term upswing would be recreated.15 Even in Kondratieff’s time critics denied the regularity of long waves, thus banishing them to the realm of coincidence. Today the same objection is put forward by those who have never read Kondratieff in the original. His starting point was not 50-year regularity, as critics assert. “If repetition at regular intervals of time is meant by regularity, long waves can be denied just as little as medium ones. There is absolutely no strict periodicity in social and economic phenomena – not even in medium waves. Their length varies at least between 7 and 11 years, i.e. by 57%. The duration of the major cycles observed varies between 48 and 60 years, i.e. by only 25%. If homogeneity and contemporaneity of fluctuation in the different elements of economic life is meant by regularity, then it is present in long waves to the same degree as in medium waves. Finally, if regularity means the fact that medium waves occur internationally, then long waves are no different.”16
15 Kondratieff, Preisdynamik, p. 38.
16 Kondratieff, Lange Wellen, p. 592.
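Kondratieff’s regularity argument in the quoted passage rests on simple arithmetic: medium waves of 7–11 years vary by (11 − 7)/7 ≈ 57%, long waves of 48–60 years by (60 − 48)/48 = 25%. A minimal sketch checking those figures (the function name is mine, not from the text):

```python
def relative_variation(shortest: float, longest: float) -> float:
    """Spread of observed cycle lengths, relative to the shortest length."""
    return (longest - shortest) / shortest

# Medium (commercial-industrial) waves: 7-11 years
print(f"medium waves vary by {relative_variation(7, 11):.0%}")   # ~57%
# Long (Kondratieff) waves: 48-60 years
print(f"long waves vary by {relative_variation(48, 60):.0%}")    # 25%
```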
A new upturn after the ebbing of a wave was not inevitable, however. When a new cycle began it did not represent an exact repetition of the previous one, for the national economy had already scaled a new level. The mechanism, however, remained essentially the same in the new cycle.17 Ultimately the duration of a cycle, particularly the depth and length of the following recession, depends on how quickly a society taps the next structural cycle – an unpredictable human and political factor. If what he had developed was correct, he wrote at the end of the essay on Price Dynamics in 1928, then “the sources of the depressed condition . . . prevailing in the global economy are not by any means exhausted”. The following decade was to show how right he was.
Why the Most Tragic of all Economists Was Killed

By contrast with his Marxist critics like Leon Trotsky or Eugen Varga, Kondratieff did not assume that the economic slump following the First World War had initiated the “period of general decadence and the demise of capitalism”; rather, it was the result of a cyclical long wave drawing to a close. He was to pay for this with his life. For while in the post-1929 world economic crisis the Marxists believed that the predicted collapse of capitalism had arrived, Kondratieff disagreed: no, they were only experiencing a deep dip between two long structural cycles. For Stalin a concept according to which capitalism could have prosperity following depression was a priori counter-revolutionary. Pity about the great career which had started so promisingly. Perhaps the world would have been spared many economic mistakes had Nikolai Kondratieff survived the Stalinist period. Nikolai Kondratieff, the son of a simple peasant, was born on 17 March 189218 in the village of Galuevskaja in the central Russian province of Kostroma, about 320 km north-east of Moscow. After primary school there was no money for higher education – so he read up the material himself and in 1911 passed the secondary school leaving examination without ever having been to class. While still a teenager he campaigned for democracy and the socialist party, and was arrested by the Imperial police in 1905 and 1906. As a student at the University of St. Petersburg Faculty of Law he taught workers in his spare time so that they too could be politically emancipated. When in 1913 the princely house of Romanov celebrated the 300th anniversary of its accession to the throne he demonstrated against the monarchy – and was arrested again. After successfully completing his studies in 1915 Kondratieff worked in the administrative department of a St. Petersburg district.
As a 25-year-old he was involved in the 1917 February Revolution which deposed the tsar, he wrote articles analysing the food situation, was elected as a member of the Constituent Assembly and served in the Kerensky government as deputy Minister for Food. In the October Revolution this was forcibly dissolved by the Bolsheviks – Kondratieff again briefly landed in jail. He then went to Moscow, where in 1920 he set up his Institute of Conjuncture and drew up the Five Year Plan for Agriculture. He argued the case for market structures and for only collectivising agriculture later, when sufficient capital would be available for machinery. He said that until then the state should permit individual farmers to work for their own economic advantage. Although his ideas were increasingly rejected by the party’s Central Committee, he continued to express criticism of government policy. In 1928, when Lenin’s rather market economy-oriented “New Economic Policy” (NEP) was again replaced by a planned economy, Kondratieff had to resign his post as Director of the Moscow Institute of Conjuncture and the Institute was closed. Because the Communists could not tolerate his interpretative competition, they arrested him in 1930. Kondratieff was held in solitary confinement in Suzdal, 180 km east of Moscow. Cut off from academic life and condemned to suffocating monotony, he disintegrated intellectually and physically. In the silence he became almost deaf, steadily lost his eyesight, and was tormented by insomnia and headaches. The books which he still wanted to write, the theories he still wanted to develop were never to be. His work, which he knew was something substantially new, seemed lost. Letters to his wife Evgeniya, published in the two-volume Collected Works,19 record his despair. “All the new and objectively potentially not uninteresting ideas which I had and which were dawning on me are bit by bit being consigned to the grave,” Kondratieff wrote to his wife on 28 March 1938. During the Stalinist “Purge”, after 8 years in prison, he was sentenced to death and shot on 17 September 1938.

17 Kondratieff, Preisdynamik, p. 38.
18 4 March in the Gregorian calendar is 21 February in the Julian calendar, but the complete English edition of “The Works of Nikolai D. Kondratiev” gives his date of birth as 17 March.
The History of an Economic Theory

All that is known of Kondratieff in the West are translations of incomplete partial versions of the original text, while his extensive complete oeuvre has been ignored for six decades. “That is why Kondratieff waves have since been discussed by authors who were totally unfamiliar with the most important aspects of Kondratieff’s texts”, say Kondratieff economists Francisco Louca and Christopher Freeman.20 The “Great Soviet Encyclopaedia” subsequently called the long-wave theory an “ordinary bourgeois theory of crises and economic cycles”: “The concept of long waves contradicts the fundamental Marxist thesis on the inevitability of economic crises in capitalism, and conceals the irreconcilable contradictions of the capitalist society”.21 Approximately one Kondratieff wave after his death, in October 1987, the Soviet Union finally publicly rehabilitated the economic researcher. He would probably have been long forgotten in the West had the economist Joseph Schumpeter not named the long cycles after Kondratieff.22 Even Schumpeter did not look at reality through macroeconomic statistics. He thought that competition in quality and production methods was more important than the price competition debated by economists. Almost never, wrote Schumpeter, were changes in production or goods forced by consumers, for example because their taste or requirements had changed. Innovation happened because of creative and dynamic entrepreneurs – by which he meant not the purely administrative “proprietors” of established run-in sectors, but innovative personalities.23 These “entrepreneurs” in the true sense of the word competed down less innovative firms. Changes were therefore triggered on the production side – where Kondratieff saw a change in the price relationships between different factors of production.24 Although Schumpeter praised the “silvery clear reasoning” of the mathematical methods propounded in economics by Leon Walras, he thought they were only a skeleton, a basis for economic analysis. They did not account for the causes of growth and cyclical fluctuations. Although Schumpeter gained recognition, he lost the competition with his contemporary Keynes. When the latter published his “General Theory” in 1936, Schumpeter responded to his concept of demand management with the story of the French king Louis XV, who asked his mistress Madame Pompadour to spend as much money as possible in order to increase effective demand and prevent a depression. Sarcasm did not help against the Keynesians’ promising though false message. They declared that they had abolished long cyclical waves, that economic conditions could after all be “designed” as technocratically as building a car using demand policy and money supply.

19 “The Works of Nikolai D. Kondratiev”, two volumes, London (1998), Pickering & Chatto, 650 pages.
20 Freeman, Christopher; Louca, Francisco: As Time Goes By. Oxford University Press, New York (2001), p. 70.
21 Quoted in Brian J.L. Berry: Long-Wave Rhythms in Economic Development and Political Behavior. Johns Hopkins University Press, Baltimore, Maryland and London (1991), p. 37.
So Kondratieff’s theory – more radical than all the others – vanished with the triumph of Keynesianism. The victor carries the day. In reality Keynesian economic policy was only any good while the car cycle was in boom. The mainstream of neoclassical synthesis also eliminated most of the alternatives and spread the belief that the economy could continue to grow exponentially and limitlessly providing there were shrewd enough econometricians to organise the flow of money in the national economy. Kondratieff cycles or structural change were thus considered to be a touch esoteric and as not making much sense. After the 1974 oil crisis, when Kondratieff’s theory again appeared to be of interest as an explanatory model, a large number of young postgraduate students (today they are professors or political advisers) set out enthusiastically to prove the long wave theory. They found to their dismay that a long wave which slides sinusoidally through world history is not so easy to see in monetary indicators. So Kondratieff’s theory, which nobody read in the original any more, turned into a neat marginal school of thought from which most scientists distanced themselves. Like the Loch Ness monster it was talked about, but could not be proven. This was because long waves were looked for in places where they could not be found: complicated mathematical procedures were used to try and find them in macroeconomic numerical price series, interest rates or social product, something which was not entirely successful.25 Yet these methods are ultimately an exaggeration of current economics reduced to the mathematical small-scale. They are as laborious and productive as learning the telephone directory off by heart: if annual turnover in the building sector is 30 billion euros less, but 30 billion euros more in the health sector, this cannot be inferred from the gross national product. Some years ago a PC with a Pentium processor might have cost 1,000 euros or dollars; today you can get much more powerful computers for 400 euros in a food discount store. Established macroeconomics is in an outstanding position to give a realistic description of models of a national economy in which a VW Beetle is bolted together for decades with no great changes, but there are no structural or qualitative variables for today’s multidimensional dynamic trends over time. An economic science which views the world primarily from the perspective of macroeconomic “watering can” variables will only develop formulae for politicians which stick on the macro level alone – and fizzle out there. More realistic is the view taken by Kondratieff’s theory of the innovation level – of the real trends in society and the economy, with their productivity trends and shifts in cost limits.

22 Schumpeter, Joseph A.: Business Cycles. New York (1939). German translation: Konjunkturzyklen. Two volumes. Göttingen (1961).
23 Schumpeter, Business Cycles, pp. 93–101.
24 Schumpeter, J.A.: Theorie der wirtschaftlichen Entwicklung. 6th Edition, Berlin (1964), p. 100 f.
This does not simply mean adding up the number of patents in individual years – studies trying to identify the Kondratieff cycle in this way were also bound to lead up a blind alley, because the number of innovations does not necessarily say anything about the economic potential for savings and growth.26 And not every idea, not every invention becomes an innovation: it must also actually translate into a product on the market. The innovation researcher Christopher Freeman of Sussex University acquired international recognition when he said goodbye to the search for long waves in numerical sequences – these were not comparable over the longer term. Nothing in the British national product could be inferred from mass unemployment at the end of long waves, for example in the 1830s, whereas the mass misery of the time has been passed on vividly in literary material. Freeman linked up with Schumpeter and the concept of basic innovation. At the time, in the early 1980s, he accused the western industrial countries of failing to recognise the true nature of the challenges: with information technology it was becoming extremely cheap to store, process and transfer information. This was expressed not only in new machines, but in a completely new technico-economic paradigm. So Kondratieff’s perspective of socio-economic reality was disinterred and translated to the present, particularly by Carlota Perez of Caracas.27 Long cycles are a process in society as a whole. Reality is indeed a whole, but its subsystems change at different rates. New, problem-solving technology is developed faster than the structures of society can adjust to it: this “mismatch”, the disharmony between the technico-economic and the socio-institutional system, accordingly causes the congestion of productivity which keeps economic growth low for years until a social consensus has formed on where the journey is leading. Kondratieff had already described this too in other words: long cycles cannot be comprehensively measured by historical statistical series, but on the level of innovation. Basic innovation, which carries advances in productivity and hence the economy, and turns social structures upside down, can be tracked on the market. It develops over the decades in the form we know from the usual product life cycles. Its life cycle also assumes a long drawn-out S-shaped curve: market launch, strong then slower growth, and saturation. The crisis years began whenever growth rates weakened. Cesare Marchetti and other researchers at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria, have demonstrated this in numerous examples, for example the S-shaped expansion of the motorcar in Italy in the fourth Kondratieff cycle.28 When a society has decided to use a new technology or a new product, the market opens up along the S-shaped curve – which makes further trends at least assessable. In their work “As Time Goes By” Freeman and Louca demonstrated the history of the previous five Kondratieff cycles along the S-shaped expansion of the relevant basic innovation.

25 I.a. Metz, R.: Ansätze, Begriffe und Verfahren der Analyse ökonomischer Zeitreihen. In: Historical Social Research – Historische Sozialforschung, 13 (1988), 3, p. 103. And: Metz, R.: Kondratieff and the Theory of Linear Filters. In: Vasko, T. (Ed.): Konjunktur, pp. 343–376.
26 Mensch, G.: Das technologische Patt. Innovationen überwinden die Depression. Frankfurt/Main (1975). And: Mensch, G., and Schnopp, R.: Stalemate in Technology, 1925–1935: The Interplay of Stagnation and Innovation. In: Schröder und Spree (Ed.): Historische Konjunkturforschung, pp. 60–74.
This graph shows how electrification spread in US industry (Y axis: percentage of firms with electricity). The first electric motors were installed in factories over 20 years after Werner von Siemens developed the electrodynamic principle. The pioneers were derided, until around 1900 it became clear that this was the technology of the future. Within a few years, by roughly 1920, every factory had electric power; by the end of the 1920s almost every household was connected to the electric grid. Over time the savings achieved in time and resources used for innovation petered out. There were virtually no profitable investment opportunities left, so interest rates plummeted towards zero and reckless lending took place. In particular tangible assets such as shares, raw materials and real estate experienced a price bubble, while thanks to low interest rates the consumer markets were opened up at a flying pace and were soon saturated – the background to the 1929 economic crisis and stock market crash.

27 Perez, Carlota: Structural change and assimilation of new technologies in the economic and social systems. In: Futures, October (1983), pp. 357–375.
28 I.a. Marchetti, Cesare: The Automobile in a System Context. The Past 80 Years and the Next 20 Years. In: Technological Forecasting and Social Change (1983), Vol. 23, pp. 3–23.
World crude oil extraction in millions of tonnes also illustrates the S-shaped expansion of the automobile in industrial countries – an indicator of the fourth Kondratieff cycle. At first it was only a millionaires’ plaything, after the First World War the car made an increasingly frequent appearance on the roads and in cinema films, until during the Second World War so much was invested in tank factories, hydrogenation plant and the production of synthetic rubber for tyres, soldiers in transit learned to tinker with their vehicles and prisoners of war built more roads, that after the war an economic miracle was sparked off because the car was now efficient enough to spread throughout society. So Kondratieff cycles really do occur – where businesses and society are restructured in order to tap the potential utility of a new basic innovation; where they determine the rhythm and direction of innovations for decades, where they save enough resources or facilitate new ones to unleash a volume of sales in all markets combined which is strong enough to carry the whole economy. Whenever a society decides to use a new technology or product, the market opens up along the S-shaped curve of basic innovation (in real unit numbers or railway kilometres but
On the Reality Behind Money
25
Stages of prosperity
Productive knowledge handling Information technology Motorcar
Prosperity level
Electric power Railways Steam engine
1780s
1840s
1890s
1940s
1980s
2010s
Fig. 2 An aid to understanding Kondratieff cycles – as stages of prosperity. From: Erik H€andeler, “Die Geschichte der Zukunft”
not in turnover, firstly because over time this falls in proportion to performance, and secondly because it says nothing about the turnover triggered in other sectors). The crisis years always begin whenever growth rates falter. The first extrapolation of the data of an S-curve produces a bell-shaped curve which maps the growth rates of the sector supporting the cycle. Anyone again extrapolating this bell curve, i.e. making the second extrapolation of the original data, gets a sine curve, which represents the rhythm of the spread of a basic innovation and the dynamic triggered by it in the economy as a whole, i.e. the growth of the growth rates.29 Kondratieff cycles can therefore be cast as a scientific graph – the schematic sine curve running through world history is ultimately a sequence of individual sine curves to each of which belongs an expanding S-shaped basic innovation together with its associated structures. This can be traced in detail throughout world history. And it helps to analyse and shape the economic situation (Fig. 2). Historians and economists have parted company since the triumph of “more scientific economic history” and the mathematical econometrists. History used to be the story of kings, statesmen, generals and institutions; it still falls short of being the story of real people, their living and working conditions, the technical changes implemented by bold entrepreneurs and how that changed the world. In Kondratieff’s theory both have the opportunity of combining and thereby benefiting society, for they illuminate each other: no schoolboy should leave history lessons, no historian should leave university without having understood basic economic
29 Leo A. Nefiodow: Der Sechste Kondratieff. Rhein-Sieg-Verlag, Sankt Augustin 2000, 4th edition, p. 221 – although there as turnover and not as a real variable.
relationships; but first and foremost no economist should be let loose on humanity in future without having learnt the lessons of history.
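As an illustrative aside (not from the text), the derivative chain described above – logistic S-curve, bell-shaped curve of growth rates, sine-like “growth of the growth rates” – can be sketched numerically. The logistic function and all parameter values here are assumptions chosen purely for illustration:

```python
import numpy as np

def logistic(t, K=1.0, r=1.0, t0=0.0):
    """Assumed S-shaped diffusion curve of a basic innovation (illustrative)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(-10.0, 10.0, 2001)
s = logistic(t)                 # S-curve: cumulative spread of the innovation
growth = np.gradient(s, t)      # first derivative: bell curve of growth rates
accel = np.gradient(growth, t)  # second derivative: one up-and-down swing,
                                # resembling a single period of a sine wave
```

The bell curve peaks at the S-curve’s inflection point, while the second derivative rises, crosses zero there and then falls below zero – the schematic upswing and downswing of a single cycle.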
The Significance of Kondratieff Cycles for Power Politics

The political debate would alter radically if economics were viewed in this way. Each cycle has its own pattern of success, with matching company structures, management methods and curricula. This gives rise to a ground-breaking thesis: it is always the countries and regions which are best at developing and applying the relevant basic innovation of a Kondratieff cycle, and the associated social rules of the game, which have the most resources to solve their problems, live in prosperity, are economically viable and are therefore both militarily and politically successful. Peter Bairoch (1982)30 confirms this thesis with his study of how the relative shares of certain countries and regions in global industrial production changed in the nineteenth and twentieth centuries (figures in per cent):

                    1750   1800   1830   1880   1900   1913   1953   1980
Great Britain        1.9    4.3    9.5   22.9   18.5   13.6    8.4    4.0
Hapsburg Empire      2.9    3.2    3.2    4.4    4.7    4.4     –      –
France               4.0    4.2    5.2    7.8    6.8    6.1    3.2    3.3
Germany              2.9    3.5    3.5    8.5   13.2   14.8    5.9    5.3
Italy                2.4    2.5    2.3    2.5    2.5    2.4    2.3    2.9
Russia               5.0    5.6    5.6    7.6    8.8    8.2   10.7   14.8
Europe              23.2   28.1   34.2   61.3   62.0   56.6   26.1   22.9
USA                  0.1    0.8    2.4   14.7   23.6   32.0   44.7   31.5
Japan                3.8    3.5    2.8    2.4    2.4    2.7    2.9    9.1
Third World         73.0   67.7   60.5   20.9   11.0    7.0    6.5   12.0
China               32.8   33.3   29.8   12.5    6.2    3.6    2.3    5.0
India (Pakistan)    24.5   19.7   17.6    2.8    1.7    1.4    1.7    2.3

Source: Peter Bairoch, “International Industrialization Levels from 1750 to 1980”, pp. 292–304.
Around 1750, when industrialisation had not yet begun, Great Britain was just a lump of rock in the North Sea. At the time the British did not even account for 2% of world industrial production – matching the rest of the world relative to its population. But then something happened to upset the global equilibrium: James Watt, the steam engine, and its use in the textile and iron industries. The English were at their most productive with this system; their economy showed the strongest growth. After the first long wave, in about 1830, they were manufacturing approximately 10% of the world’s industrial production. And then they were the first to build railways, from 1825 onwards, allowing them to expand trade and commerce far
30 Peter Bairoch: “International Industrialization Levels from 1750 to 1980”, pp. 292–304.
afield. After the second Kondratieff cycle, in 1880, the British suddenly had almost a quarter of world industrial production to distribute. At the time, of course, they could afford to be saddled with a costly, economically absurd colonial empire, to kit their army out with state-of-the-art equipment and to send their ships to sail the seven seas.

But we must remember that in the nineteenth century the English were not rich and powerful because the central bank cut interest rates, or because wages were so high or low, or because a tax reform was brought forward – the abundantly irrelevant themes of our economic debate – but because they were best at applying and implementing the seminal invention of that period and its associated social structures. The English aristocracy of 1800 was ready to embark on entrepreneurship at a time when the German aristocracy was still dreaming of chivalry and looked down on people who made their money from trade. In other words, our ideas of what is important and desirable in life influence our actions, and hence also our economic dealings.

After these two structural cycles the English adhered to their accustomed patterns of success. In 1880 a young British businessman would have said to himself that his father and grandfather had always made money from steam engines and railways, so he would do exactly the same. That is human: everything we learn is an expensive investment, and none of us wants our hard-won learning to be devalued, so we would rather adapt our perceptions than revise our opinions. People are generally not willing to change for the sake of change, especially if they have previously been successful in a specific way. The Germans, on the other hand, the industrial latecomers who in previous decades had produced only 3% or so of the world’s industrial goods, backed electricity, the new basic innovation of the third Kondratieff cycle.
That is why we have names like AEG, Siemens and IG Farben, and that is why the Germans caught up. In 1913, with almost 15% of world industrial production, they overtook England, which by then manufactured only 13% of goods worldwide. Although the English tried to fight back against German export goods with the “Made in Germany” seal, it did not help them, because a German entrepreneur who ran his factory on electric motors was much more productive than an Englishman, however refined or high-performance his steam engine.

Russia, which despite its size and population had previously never managed more than 5% of world industrial production, was able to become a world power after the Second World War and in 1980 accounted for almost 15% of world industrial production. Why? Because the fourth Kondratieff cycle was about using cheap energy from oil. At the time the old Soviet Union still had huge energy reserves, a large part of them very near the surface and cheap to extract. This is why the Russians were able to conquer space, support Cuba and lead the arms race with conventional weapons. But the moment this paradigm was exhausted after the oil crisis – when economic growth no longer depended on the continued consumption of yet more oil and natural gas, and prosperity depended instead on handling explosive amounts of data more efficiently, as implemented in machines and in information processing by the computer – the Soviet Union, together with the Eastern Bloc and the former GDR, was bound to collapse, because the rigid structures of communist society were unable to exploit this paradigm.
Japan, on the other hand, previously responsible for only 2–3% of world industrial production, took off in the 1970s and 1980s and by around 1980 was already producing approximately 10% of all industrial goods worldwide. The deciding factor here was neither working time (the Japanese spent more time at work, but chatted) nor wages nor capital costs. From the perspective of Kondratieff theory the Japanese took off in this period because they made the best use of the basic innovation of the fifth Kondratieff cycle: they took the lead in refining, manufacturing, exporting and applying information technology, and used computers to manufacture cars more cheaply and with fewer defects than those still putting cars together manually on an assembly line. In Germany and Europe, on the other hand, the computer met social resistance: “job-killer computers”, “the wired society”, “George Orwell’s 1984”. Because they at first hesitated to use computers, Europeans fell behind in productivity and suddenly had an unemployment problem because they were not productive enough.

It could have been a different story. A German, Konrad Zuse, invented the computer in 1944; it was mainly the US military which pressed ahead with its development – for defence and space travel. Back in 1969, however, the Japanese decided to coordinate computer development among their companies, and they forged ahead until 1980. At the time a German engineer was better educated and more creative than a Japanese engineer. But ten Japanese engineers together were much more productive than ten German engineers, who failed to keep each other informed, were bad at listening and cooperating, and whose differences of opinion degenerated into power struggles still unreconciled by the time they retired. Europeans simply worked too slowly at the time.
In the 1980s, when the Japanese brought a new generation of chips to market, they could charge 32 dollars; when Siemens came to market two years later with the same generation, all it fetched on the world market was eight dollars, and it was unable to recoup its development costs. This was not due to prices, interest rates or wages, but to a social attitude characterised by different ethical roots: in Japanese society Confucianism, Buddhism and Shintoism created a group ethic which encouraged collaboration within the group, while those outside were fought mercilessly.

The West, on the other hand, developed an ethical system which always saw things from a more individual perspective. At a time when it was completely normal to buy and own other men as slaves, the Old Testament Jews suddenly arrived and said that man was created in God’s image. Every individual had a very special, inalienable dignity, whether young or senile, ugly or beautiful, rich or poor. Although Christianity is a universal ethical approach (“Love thy neighbour as thyself”), it brought a sense of individual worth into the world in order to make a universal ethic possible at all. Individualism continued to spread, particularly after the Enlightenment. This forced ideas of group ethics into the background, but also drew on elements of universal ethics. Individual liberties and rights took centre stage, leading to the trend experienced on motorways today: foot to the floor in the outside lane, freedom of the road for free citizens. Brilliant individual achievements can occur in a culture like this, but individualistic cultures are bound to decline as soon as survival depends on bringing
several individual jobs together. Even cultures centred around a group ethic fall behind in the race for productivity, for with a closed hierarchical task force of loyal feudal subjects it is impossible to collaborate with constantly changing partners, customers and suppliers. That is one of the reasons why countries like Japan and the other Asian tiger states suddenly found themselves with economic problems after the wave of computer hardware success. The prediction made by Leo Nefiodow31 back in the mid-1990s came true: after the millennium, the computer’s steepest S-curve rise in the market was already behind it. As far as over-the-counter PCs were concerned, existing older computers were mostly being replaced and there was hardly any additional productivity.

Economic growth in the fifth Kondratieff cycle was ensured not by information technology sales – purchase turnover and call charges in the case of the mobile phone, for example – but by the fact that someone moving around with a mobile could use time more productively for work and arrange appointments more efficiently and at shorter notice. The growth effect for the economy in Kondratieff’s theory was the time saved, the extra work made possible. Once the resources saved as a result of information technology declined, however, we entered a period of falling profit margins, falling real wages and vigorous distribution battles.

Of course it is fun to be a politician at a time when a new technological skills network is expanding and the economy seems to be perpetually growing – you can then persuade voters that prosperity is linked to your brilliant achievements. In a long downturn, on the other hand, an increasing number of problems has to be solved with a steadily decreasing quantity of resources – and nobody is keen to accept responsibility, as after the third Kondratieff cycle, when power simply fell into Hitler’s lap (so much for the “coup d’état”).
In the 1920s the unions became almost impotent, not because businessmen were suddenly so wicked, but because they themselves hardly made any profit. The great coalition of the Weimar Republic collapsed in the dispute over higher contributions to unemployment insurance; the social-liberal coalition in Bonn was overthrown after the fourth Kondratieff cycle, in 1982, on the issue of new debt. And we too are now facing increasing distribution battles, particularly over social security, pensions and health insurance.

These distribution battles arise not only within societies, but also between them. In a long upturn there has always been globalisation (old hat): world trade expands and we all love one another. In a long downturn, on the other hand, politics comes under pressure from business: please erect trade barriers and tariffs so that the others cannot take away our market share. In the 1880s corn tariffs isolated the German Reich; in the 1920s world trade almost came to a standstill; in the 1970s technical standards were invented so that the French could no longer sell to us (and vice versa). And today these distribution battles are staged between the major
31 At the time with the Gesellschaft für Mathematik und Datenverarbeitung (GMD) [Society for Mathematics and Data Processing], St. Augustin; in his book “Der 6. Kondratieff”, Rhein-Sieg-Verlag, St. Augustin 1996.
economic blocs: the USA reacts with sudden protectionism against Europe or China (and vice versa). We ought to save ourselves the resources lost in trade wars – the forward-looking Kondratieff view allows for alternatives, instead of getting bogged down, as in each previous long Kondratieff downturn, in structural conservatism, with company mergers, high unemployment and the flight into discount wars and mass production (which nobody needs any more). For we are not at the mercy of an oscillating sine curve, as the Kondratieff cycles are diagrammatically represented. If we can manage to identify Kondratieff’s current “real cost limit”, which is throttling growth, then there is no need for a long and deep crisis. What company structures and what management methods can overcome the next scarcity barrier? What education do we need, what investment has to be made? We can only focus on such questions once the economic sciences accept that economics is first and foremost a cultural achievement. This, however, contradicts the majority view of economists, whose basic assumptions have on many points proved ivory-towered and unrealistic.
Farewell to “Homo Oeconomicus”

The economist’s belief system would have it that the market out there is inhabited by rational economic players pursuing their own interest and thereby optimising their benefit – in accordance, that is, with the incentives set by the underlying system. Once these basic assumptions about “Homo Oeconomicus” were accepted unquestioningly, it became possible to encapsulate economic activities, i.e. human dealings, in mathematical equations: changes in gross national product when health insurance contributions are increased again (all other influences remaining the same), when businesses enforce longer working hours or when the central bank cuts interest rates.

This mechanistic thought process originated in the eighteenth century: economic science transferred the laws of natural science to economic events because those laws were so successful as mathematical descriptions, for example of complex machinery such as clocks and even, for the most part, of rocket flight paths.32 Independently thinking people were thereby turned into dumb physical mass particles which obey mathematical formulae, can be understood by simple equations and whose economic movements can be calculated on paper to several decimal places. When market theorists talk of human freedom, what they really mean is the freedom of movement of a pendulum at liberty to follow its natural laws.

32 This follows the arguments of Karl-Heinz Brodbeck: Die fragwürdigen Grundlagen der Ökonomie. Eine philosophische Kritik der modernen Wirtschaftswissenschaften. Wissenschaftliche Buchgesellschaft, Darmstadt (1998).
The problem is that human actions cannot be equated with apples, which fall from the tree in their inability to defy gravity. When two billiard balls ricochet off each other you can calculate the energy released and the continuing direction of movement because they obey the laws of nature. But what would happen if billiard balls could discuss the situation and decide the direction they wanted to take? Everything we undertake or fail to undertake is connected with the goal towards which our actions are directed. Our actions are determined by our idea of what is important and desirable in life, and different people consider different things worth striving for.

This means that calculations based on the popular basic assumption that human beings act in their own interest are not worth the paper they are written on, because it is not at all clear, let alone impartially and scientifically ascertainable, what actually is in my own interest. When a warder thinks it in his own interest to guard a prisoner, whereas Maximilian Kolbe defines self-interest as going into the “hunger bunker” for another man, it is clear that self-interest is only a subjective value judgement. The same is true of people who smoke 20 cigarettes a day, by contrast with non-smokers. The way in which market liberals define self-interest is a default value: purely self-involved individualists, calculating in monetary units and rooted purely in this materialistic world, are unproven dogmas of faith. There is no rational explanation for what we consider to be our aims in life; only the actions they inspire can be rational. Reality shows that we make emotional decisions, that our perception of reality is limited and even clouded by our preconceived notions, wrote Karl-Heinz Brodbeck, the Würzburg economics professor, in a book casting doubt on the “questionable foundations of economics”. What people think is to their benefit depends on the information reaching them.
Advertising is responsible for the fact that the hiking trails in the Alps are populated with people in red socks: when colour photography arrived on the scene it was promoted with panoramic colour views. Vivid red is a colour rarely found in nature, so the publicity people had walkers wearing red socks below their knickerbockers. Reality in the mountains has since been geared to its depiction in advertising. On the other hand, there is a strong faction in media impact research for whom the life worlds represented in the media bear hardly any relationship to reality. But if that were the case, why does industry spend so much money on television advertising?

Whether someone finds something good or beneficial is a question of interpretation. In the 1980s there was a time when grappa, the Italian spirit, was the “in” drink of the smart Munich set. Once the Süddeutsche Zeitung had clarified that grappa was a revolting brew made from the waste products of the wine harvest, the big stores cut the price per bottle from 80 to a no less outrageous 50 marks. Demand continued to fall despite the drop in price. Demand is less a function of money, more one of information. And human freedom underlies even the information I do or do not read – a fundamentalist grappa drinker will simply not read a negative article on grappa. Human beings are free – however much they question themselves or follow their current whims or hormone levels. Economics is formed by human behaviour, by creativity, irrationality, clouded perceptions and individual aims in life – ultimately by human freedom. In the final
analysis it is a function of our chaotic intellectual world, and hence mathematically indescribable. The downfall of the rationality postulate reduces almost the whole of traditional economics to ashes, and with it the raison d’être of economics as a physico-mathematical science which has slept through the change to the knowledge society: calculations in which labour is increasingly replaced by capital and machines make workers unnecessary are spurious. In the information society this is already nonsense, because there are no machines which can replace human thought, i.e. work on fuzzy information and accept responsibility. Productivity in intellectual space will in future follow other patterns of success, to be developed in the sixth Kondratieff cycle. It will render obsolete the foundations of economic thought patterns inherited from the mechanistic industrial society: social stability and prosperity will be achieved by the very opposite of the egoistical competition which economists assert is the mainspring of the economy.

If it wants to show politics the way to more prosperity, economic science must address itself to the historical and cultural conditions of economic action in a very concrete, unique situation, instead of continuing to refine mathematically exact instruments which have only a rudimentary grasp of reality. With Kondratieff’s theory, on the other hand, economic development and structural change can be looked at comprehensively. Economics then becomes a realistic science of real life – of an evolutionary, irreversible and complex process. The political and legal organisation of a society cannot be gleaned from monetary statistics. It is not enough to have a lot of monetary capital and a theoretical knowledge of steam engines, as in Russia under the tsars: as long as peasants were answerable to their landlords as serfs, they could not become more productive by migrating to the cities and becoming workers.
To say nothing of religious value judgements and their effect on material prosperity. The pressing need for reform in economics is underlined by the fact that diploma dissertations which, in addition to interest rates and gross national product, take account of human behaviour, historically unique situations and social conflict (to do with technical changes affecting the economy) are shunted off to sociology, psychology and – from an economics professor’s point of view – other esoteric, airy-fairy subjects. An overspecialised science mirrors the specialisation of the assembly-line worker and ceases to be a science of the future, because overspecialisation prevents the solving of problems which increasingly refuse to stay within academic subject limits. The whole is never the sum of its parts but their interaction.

If economic science fails to incorporate Kondratieff’s global theory it will not be in a position to explain long periods of crisis or to help the real economy get back on its feet. Nothing will then help in the long downturn: neither government spending programmes, low interest rates, bullying the unemployed and social welfare recipients, dressing down the central bank nor feigned optimism. So far Kondratieff’s theory has not appeared in mainstream economics, although it is sometimes mentioned in passing – like an exotic, practically extinct animal. Perhaps it features in the odd lecture. But the long cycle theory is not included in the existing schools of thought, for it would turn the established theories – neoclassicism, monetarism and Keynesianism – upside down in equal measure. According to it, the
strength or weakness of a country’s economic prosperity is determined by the degree to which its inhabitants put into effect the new means – technical, but also social, institutional and intellectual – of overcoming scarcity. This is a different perspective from the classical notion that the market price levels everything out towards full employment. And harsh reality has long since dissipated even the Keynesian feasibility idea, according to which the economy can be globally controlled with macroeconomic “watering can” variables such as money supply and government expenditure.
Politics of the Real Economy

When the West German government under Chancellor Helmut Schmidt reacted to the oil crisis with big government spending programmes, this simply led to inflation in sectors which were overheated in any case, while other sectors made employees redundant and Germans simply saved more – the monetary measures came to nothing because the real economy was stagnating. When Helmut Kohl took office in 1982 he cut back state spending and social security benefits and increased both social security contributions and taxes on consumption. According to prevailing doctrine this was the fast track to recession. Instead the economy sprang to life – simply because in real life the computer was conserving resources, increasing profits and hence making new investment and jobs profitable again.

Monetary theories can neither account for nor shape the deeper causes of economic development: these are to be found in real working conditions – in future, in the overall state of health, including psychological and social health, of the knowledge workers engaged in project work. Instead, public debate revolves around central bank interest rates and money supply, or bank bonuses and lending regulations. In times of an expanding technology network, like that of the computer in the 1980s, it would have been almost irrelevant for the central bank to raise interest rates by a quarter of a percentage point, because that would hardly have throttled the business climate – the economic cycle is not causally dependent on the interest rate but on the rhythm of the overall economic trend in productivity. (By contrast with a purely operational productivity increase, where people who are laid off remain unemployed and have to be taken care of by social security benefits, a boost to overall productivity leads to more prosperity, particularly if the redundant workers can be employed elsewhere.)
Monetary policy therefore remains powerless, even if borrowing costs hardly anything: in 2010 the American Federal Reserve is lending money for virtually nothing – its way of expressing the gravity of its concerns for the future. The cost is lower than inflation – in theory investment is now more profitable than before, and house buyers and small consumers should have more money to spend. But that is pure theory. Whether anyone invests depends on the real economic circumstances – on whether there is anything worth borrowing money for. Monetary policy alone can only endeavour to keep the money supply as suitable as possible for stabilising monetary value; it cannot help the economy to grow. With monetary control, central
banks can only marginally decelerate or accelerate a new technological system like the motorcar or electrification. They merely ride along their Kondratieff wave – indeed, conversely, they tend to be driven by it – but they cannot trigger it. At the end of the dynamic of an outworn invention, monetary policy makers are therefore left with nothing: they cannot take nominal interest rates below zero.

This situation is visualised in the neoclassical synthesis (of classical and Keynesian ideas) by the so-called IS-LM model, said to represent the equilibrium of saving, investment, money demand and money supply. Here lower interest rates automatically trigger higher investment. When interest rates approach zero and still nobody invests more, the jargon for this is the “liquidity trap”, which translated means much the same as a shrugged “shame, but there it is”. Again the model is saying, roughly, that there is no investment because no investment is being made. The phenomena which this science purports to explain – namely why nobody wants to invest and what should be done in the real economy to change the situation – are excluded as indeterminate, nebulous external factors. The methods long ago took on a life of their own and come between econometricians and reality. In their forecasts they have to exclude so many factors, make so many assumptions and define so many facts that it seems ridiculous to attempt to work out any mathematically exact data. Many economists escape this within the neoclassical synthesis by conducting costly research into totally irrelevant specialist issues, the social utility of which is roughly equivalent to attempts to further refine the understanding of ancient Egyptian dialects.

It is no fun being a politician in a long Kondratieff downturn.
Mounting problems have to be solved with dwindling resources; you get bogged down in more or less unsuccessful distribution battles, or in the end abandon all responsibility, like the democratic parties at the end of the Weimar Republic. In order to shake off the justified feeling of economic uncertainty during persistent stagnation it is essential to visit the past. Let us suppose, therefore, that we are a group of economists travelling back in time to visit the Habsburg Prince von Metternich in 1830, Reich Chancellor Otto von Bismarck in 1880, Reich Chancellor Brüning in 1930 and Helmut Schmidt in 1980. All four have seen better times but are now struggling with unemployment, scarce resources and distribution wrangles. Let us review the different schools of thought to see whether and how effectively their economic policy prescriptions help, and let us then head for the present-day Federal Chancellery in Berlin.
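The IS-LM mechanics criticised earlier, with their “liquidity trap”, can be made concrete in a stylised linear textbook version. This is a sketch of the model under criticism, not of anything in the chapter itself, and every coefficient below is invented for illustration: the goods market is Y = c0 + c1·Y + I0 − b·r + G, the money market M = k·Y − h·r, with the interest rate r floored at zero.

```python
def is_lm_income(M, c0=20.0, c1=0.6, I0=30.0, b=10.0, G=50.0, k=0.5, h=20.0):
    """Solve a stylised linear IS-LM system for income Y and interest rate r,
    flooring the nominal rate at zero (all parameter values are invented)."""
    # IS:  Y = c0 + c1*Y + I0 - b*r + G        (goods market)
    # LM:  M = k*Y - h*r  ->  r = (k*Y - M)/h  (money market)
    # Substituting LM into IS gives the interior solution:
    Y = (c0 + I0 + G + b * M / h) / (1.0 - c1 + b * k / h)
    r = (k * Y - M) / h
    if r < 0.0:
        # Liquidity trap: the nominal rate cannot fall below zero, so the
        # IS relation alone pins down income, whatever the money supply.
        r = 0.0
        Y = (c0 + I0 + G) / (1.0 - c1)
    return Y, r
```

With these invented numbers, once the money supply passes a threshold the rate hits zero and income freezes at (c0 + I0 + G)/(1 − c1) = 250 regardless of any further monetary expansion – the model’s shrugged “shame, but there it is” in code.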
What the Next Election Campaign Should Be About

Let us suppose that in 1830 Prince von Metternich let himself be persuaded by Keynesian thinkers to get into debt and increase government spending indiscriminately across all sectors and classes, so as to jump-start the economy through the demand created. After a brief flash in the pan – during which the many peasants repaid their debts, put up monuments to Metternich and visits to the circus boomed –
the debt-driven economy would collapse again. But if he had listened to a Kondratieff economist he would have been told that his economy was stagnating because transport was so expensive. Metternich should intervene selectively and build railways; if farmers and businessmen could sell their goods more cheaply over a greater radius the economy would pick up, and greater unit numbers would also become worthwhile, bringing prices down. It mattered not a whit whether he financed the new technology network with debt, with surplus private capital (shares, bonds) or by raising taxes and cutting consumption (pensions, personal luxuries, superfluous government spending); whether the economy started moving depended only on whether and to what extent he took care of the next structural cycle – in other words, built railways and supported the associated education, infrastructure, thought patterns and market-ready end products.

This means that the same old debate of German Bundestag election campaigns as to whether or not we should cut taxes and take on debt is totally irrelevant. It is all the same whether we finance infrastructure and training for the sixth Kondratieff cycle with private capital which citizens save thanks to lower taxes, or whether we fund it with government debt or higher taxes. The only thing which counts is whether and how much a society invests in the next structural cycle. The lower taxes always demanded by the liberals, on the other hand, are only rarely suitable for building up a new technology network. The theory behind them is that anyone spending other people’s money on things for other people finds it easy to behave like a spendthrift; money is spent most effectively when you spend your own money on things of importance to you. Total utility is therefore greater when the public spends its own money.
Yet it is more likely that the well-to-do would prefer to invest in a second car, a holiday home in the south of France or a yacht in Kiel (which does not make the economy more effective in terms of Kondratieff’s theory) than in biotech research, the quality of education or social structures. Only the rich can afford a poor state. The fact that the USA has a much lower rate of tax than Germany is a dubious example: in the USA citizens are taking to the streets for higher taxes because they are fed up with driving on wrecked roads and having to send their children to poor schools. Tax reduction is nothing more than a temporary redistribution of costs to subsequent years. Politics on tick is always immoral, save when the borrowed money is profitably invested in the medium-term future.

The neo-classicist in our group therefore proposes creating incentives for more investment. The German Chancellor Helmut Schmidt of 1980 was familiar with this from his government’s vain appeals to businessmen to please invest more. The only trouble is that if there is nothing worth spending money on and investing in, nobody will borrow money and invest, even if the money is to be had for nothing. When there is sufficient capacity, no one needs to set up another steel works or car factory – investment per se does not stimulate the economy. In 1980 the proponent of Kondratieff’s theory argued that it was the type of investment, not the amount of money, which destroyed or created jobs: if government wanted to encourage investment it had to help open up the new structural cycle around the computer through training, infrastructure and its own good example. Anyone changing tax laws or
underlying conditions for investment per se is only redistributing between the state and business profits, but is not creating a higher level of productivity.

A couple of pieces of monetary advice would be inevitable in this heavyweight round: the heads of government should see to increasing the money supply. No good, warns the Kondratieff economist: money is only a transmission belt between different sectors, which become productive at different rates and utilise capital with different degrees of efficiency. More money would not therefore make the economy more productive; Brüning merely overdid it somewhat with the money squeeze of 1930, so that prices failed to level off quickly enough. One of the classicists among the economists would advise German Reich Chancellor Brüning to devalue the Reichsmark in 1930 so that he could export more. The Kondratieff expert would counter that the effects would only be short-term due to the international reaction; in the long term, the only successful exporting country is the one that makes better use than others of the motorcar, the upcoming basic innovation. This would probably then also attract capital from outside, because there are more worthwhile things to invest in. His currency would therefore appreciate instead of falling as intended, and he would still become a world export champion (like Germany in the 1950s and 1960s).

Those who were politically socialised under the zeitgeist conditions of the 1970s would advise Messrs Metternich, Bismarck, Brüning and Schmidt to increase wages dramatically in order to kick-start the economy. This is an economic measure which we should investigate more precisely from the perspective of Kondratieff theory, as an example standing for many other established economic schools of thought.
Effect? That Depends!

In the 1999 pay round, the then German Finance Minister Oskar Lafontaine (SPD) spurred the unions on to high wage increases; in his belief system, the employed would then have more money in their pockets and would spend more, so the economy would grow. From the perspective of Kondratieff theory, enforced higher wages can have several possible effects, depending on the circumstances. Initially, production costs increase and the entrepreneur's profit melts away. Then it depends on his and the market's reaction: he passes the higher costs on to customers in higher prices. At a time of expansion like the early 1960s, customers would not mind; they would buy the product at a higher price, since their prosperity has also increased in real terms thanks to better production methods. In this case, with the higher wage rates they went on strike for, the employed would have mopped up the additional income that other sectors had created through productivity increases. In this way, professions whose productivity probably does not change much, for instance taxi drivers and teachers, also profit from the rising productivity of other sectors (a primary school teacher earns more in real terms in 2013 than in 1973, but pupils learn to read and write just as well as they did 40 years before).
Things are different in a long downturn, when society stops becoming more productive: then products suddenly cost more and fewer are bought. Entrepreneurs not only have higher costs; their turnover falls as well. So they will discontinue unprofitable production and make workers redundant, or leave the price as it is because the market will not tolerate a higher one. They then forego profit more or less voluntarily, have less money for investment in the long term, take longer to pay back what they owe, and scale back production. The outcome is the same: fewer employees.

Another possibility is that higher wages cause entrepreneurs to invest in labour-saving machinery to boost productivity. This was the argument at an SPD party conference in the 1970s, when high wage demands were advocated as a way of testing the efficiency of the German economy. The outcome, again, depends. In a booming market like the car industry in the 1960s, the use of more machinery did not lead to unemployment, but to the ability to meet the desire for individual mobility more quickly. But when a need is satisfied, or when in a long downturn more and more people share a car and the existing cars run until they are beyond repair and have to be taken off the road, then using more machinery only results in fewer employees being needed and some being made redundant.

Does this lead to mass unemployment in a national economy? It depends. In the 1950s and 1960s, technical improvements to coal mining meant that fewer and fewer miners were needed. This was not a problem as long as they could be employed in booming sectors like road building and the car industry. Once the fourth structural cycle had run down, there was nobody to take them on: if a miner was unemployed he was very likely to remain so, and when the fifth Kondratieff upturn needed a new type of specialist workforce they were difficult to retrain. Then there is also the possibility of wages and prices rising but no more being produced in the economy as a whole.
The only consequence is then that money is worth less and people buy the same things at higher prices, as was the case in the stagflation of the 1970s. In brief, no conclusive relationship can be construed between the implementation of higher wages and a welfare gain for society, as Lafontaine and others believed. Whether a wage increase boosts demand, as the Keynesians assert, or damages the economy because prices rise, as supply-oriented economists see it, depends on the phase of the relevant Kondratieff cycle. If one therefore cannot be on the safe side with either of the two schools of economic thought, it is equally ineffective to try both at the same time.

Kondratieff's theory ends the decades-long economic debate as to whether policy should be supply-oriented or demand-oriented: it stands above such things, for a basic innovation has a positive impact on both sides. Anyone implementing the new technology together with the corresponding organisational patterns and behaviours can produce more cheaply and produce more; the adherents of classical theory, who explain growth by capital accumulation and lower prices, then feel themselves validated. On the other hand, basic innovation also stimulates the demand side: if the improved product is cheaper, I have more money in my wallet to buy more of the same or spend the saved money elsewhere. And if the sector which can produce things better and more cheaply thanks to innovation also employs more
people, this likewise produces new income which increases the level of demand in the national economy as a whole. But both supply-oriented and demand-oriented theories bypass the causes of further economic development and well and truly deserve to be mothballed.
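The "it depends" logic of the pay-round discussion above can be condensed into a toy calculation. All numbers, and the assumption of full cost pass-through to prices, are hypothetical illustrations, not figures from the chapter: a wage rise increases unit costs, and whether employment suffers depends on how elastic demand is, which the Kondratieff view ties to the phase of the structural cycle.

```python
# Toy sketch (hypothetical numbers): the employment effect of a wage increase
# depends on the price elasticity of demand, which differs between an upswing
# (tolerant, inelastic demand) and a downswing (saturated, elastic demand).

def employment_after_wage_rise(wage_rise, elasticity, labour_share=0.6):
    """Relative employment level after a wage increase.

    Unit cost rises by labour_share * wage_rise; assuming full pass-through
    to prices, the quantity sold (and hence labour needed) falls by
    elasticity * price_rise.
    """
    price_rise = labour_share * wage_rise
    return 1.0 - elasticity * price_rise

# A 10% wage rise in an upswing (inelastic demand, e = 0.2) ...
upswing = employment_after_wage_rise(0.10, elasticity=0.2)
# ... versus the same rise in a downswing (elastic demand, e = 1.5).
downswing = employment_after_wage_rise(0.10, elasticity=1.5)

print(f"upswing:   employment index {upswing:.3f}")    # barely affected
print(f"downswing: employment index {downswing:.3f}")  # jobs are shed
```

The same wage demand that is absorbed almost without job losses in a boom sheds roughly 9% of the workforce in the saturated-market scenario, which is the chapter's point: the sign of the policy's effect depends on the phase of the cycle, not on the policy itself.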
The New Rules of Success in the Knowledge Society

So in which sector lies the next scarcity to be overcome in the work process? Where will it be uphill all the way? Many think energy and raw materials. Yet the less oil is available, for example, the more profitable renewable energy sources become: coal from sewage sludge, solar power installations in North Africa, improved energy efficiency, higher performance from solar cells, for instance. This will balance out the losses incurred by more expensive oil and gas, but it is not really a higher phase of prosperity, only the same one in another guise. Who is allowed to consume energy will ultimately be decided on the market by who uses it most efficiently, and that depends on the quality of knowledge work: analysing a situation in order to make the right decision; promptly finding, in the deluge of information, the knowledge someone needs to solve a problem; understanding what the customer actually thinks.

But there will not be any new steam engines to make our ideas more productive. The only scarcity will be educated people and their problem-solving added value, their health and productive working life, their ability to work together (Fig. 3). This is the first time we are facing an intangible scarcity barrier in an increasingly intangible economy. A lot of indicators, such as mental resignation and communication problems, testify to the fact that information services are not yet efficient enough: those in work are under pressure to change their social behaviour in particular, to cooperate more efficiently and to make better use of knowledge.
Fig. 3 Schematic diagram of Kondratieff waves projected into the future. From: Erik Händeler, Die Geschichte der Zukunft
And because education is becoming an expensive, decades-long investment, it also has to be written off over a longer period; the demand for maintaining good health is becoming so strong that it can support an upswing. Yet from an economics point of view, current health policy is merely redistributing money from one pocket to the other, once again. If politics were to discover Kondratieff's global perspective, it would concern itself in real life with a better working culture and with keeping even the healthy in good health. More than 70 years after his death, Kondratieff's theory would then have proved something: that in the long term, ideas are mightier than bayonets and repression.
Literature

Baaske WE (2002) Aufbruch zum Leben – Wirtschaft, Mensch und Sinn im 21. Jahrhundert. Universitätsverlag Rudolf Trauner, Linz
Bairoch P (1982) International industrialization levels from 1750 to 1980. J Eur Econ Hist 11:292–304
Berry BJL (1991) Long-wave rhythms in economic development and political behavior. Johns Hopkins University Press, Baltimore, MD
Berry BJL (1997) Long waves and geography in the 21st century. Futures 29(4/5):301–310
Brodbeck K-H (1998) Die fragwürdigen Grundlagen der Ökonomie. Eine philosophische Kritik der modernen Wirtschaftswissenschaften. Wissenschaftliche Buchgesellschaft, Darmstadt
Durant W, Durant A (1985) The story of civilization (German: Kulturgeschichte der Menschheit), 16th edn. Sonderausgabe Naumann & Göbel, Cologne
Freeman C, Louca F (2001) As time goes by: from the industrial revolutions to the information revolution. Oxford University Press, New York
Grosser D (ed) (1990) Soziale Marktwirtschaft – Geschichte, Konzept, Leistung. Kohlhammer, Berlin
Händeler E (1996) Deutschland im fünften Kondratieff? Eine ökonomische Theorie als Herausforderung für Wirtschaftswissenschaft und Politik. Master's dissertation, Faculty of Political Science, Ludwig-Maximilians-Universität, Munich
Händeler E (2009a) Die Geschichte der Zukunft – Sozialverhalten heute und der Wohlstand von morgen (Kondratieffs Globalsicht), 7th edn. Brendow-Verlag, Moers, p 479
Händeler E (2009b) Kondratieffs Welt. Wohlstand nach der Industriegesellschaft, 4th edn. Brendow-Verlag, Moers, p 128
Kennedy P (2000) Aufstieg und Fall der großen Mächte. Ökonomischer Wandel und militärischer Konflikt von 1500–2000. Fischer Taschenbuch Verlag, Frankfurt
Kondratieff ND (1926) Die langen Wellen der Konjunktur. Archiv für Sozialwissenschaft und Sozialpolitik 56:573–609
Kondratieff ND (1928) Die Preisdynamik der industriellen und landwirtschaftlichen Waren (Zum Problem der relativen Dynamik und Konjunktur). Archiv für Sozialwissenschaft und Sozialpolitik 60:1–85
Maier H (1993) Wellen des Fortschritts. In: Zeit-Punkte 3/1993, Zeit der Ökonomen, Hamburg
Makasheva N, Samuels WJ, Barnett V (1998) The works of Nikolai D. Kondratiev, two volumes. Pickering & Chatto, London, p 650
Marchetti C (1983) The automobile in a system context. The past 80 years and the next 20 years. Technol Forecast Soc Change 23:3–23
Modis T (1994) Die Berechenbarkeit der Zukunft. Warum wir Vorhersagen machen können. Birkhäuser, Basel
Nefiodow LA (1996) Der Sechste Kondratieff – Wege zur Produktivität und Vollbeschäftigung im Zeitalter der Information. Rhein-Sieg-Verlag, St. Augustin
Perez C (1983) Structural change and assimilation of new technologies in the economic and social systems. Futures 15:357–375
Schumpeter JA (1939) Business cycles, two volumes. McGraw-Hill, New York (German translation: Konjunkturzyklen, Göttingen 1961)
Schumpeter JA (1964) Theorie der wirtschaftlichen Entwicklung, 6th edn. Duncker & Humblot, Berlin
Merit Sectors

Stefan Mann
Introduction

This chapter makes the point that the sectors of an economy also matter in a normative way. To describe the output of an economy appropriately (or so the argument goes), it is not sufficient simply to know the GDP per head; it is necessary to acquire some familiarity with the composition of sectors within the economy. The frontier between sectors and markets is not very well defined, but if the selection of goods in a system is as diverse as it is in food or pharmaceuticals, for example, it is probably more useful to opt for a definition of food and pharmaceuticals that relates to a sector rather than a market. It will be evident that many normative approaches and considerations are based on such sectors rather than on single markets, for which a microeconomic perspective is appropriate.

Section "Differences Between Sectors" paves the way for the argument of the normative relevance of sectors by introducing some dimensional differences between sectors. Section "The Tradition of Paternalistic Thinking in the Market Economy" reviews the tradition of paternalistic arguments within market economies and applies them to sectors. Section "Paternalistic Instruments in Practical Policy" reviews existing policy approaches aimed at responding to the different ways in which sectors generate utility. Section "Sectors and Utility" discusses normative arguments relating to the contribution made by the various sectors to general utility, together with potentially useful sector categories. The final section concludes.
S. Mann (*) Agroscope, Ettenhausen, Switzerland e-mail:
[email protected] S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_3, # Springer-Verlag Berlin Heidelberg 2011
Differences Between Sectors

Different sectors play different roles at different times in history. The role of the food sector, for example, has historically been one of covering our basic need to eat. Innovation in this sector, however, will have only a moderate effect on our lives. There may be other products which positively affect our health, a further array of thrilling tastes may be poised to burst upon us, and there may well be an increase in the convenience factor when it comes to preparing meals. However, these innovations change little about the way in which our needs are covered. Innovations in the IT sector during the past 30 years, by contrast, have made a significant contribution towards changing our lives. The way in which students conduct research, how we exchange information or how we plan our travel bears hardly any comparison with the way our grandparents carried out these tasks. The way we go about preparing apple cakes on Sundays is in all probability much closer to the practice of our grandparents, even if the proportion of people who buy their apple cake at the store has risen.

It is unlikely that the greater pace and impact of innovation in IT compared with the food industry is coincidental, or due to the higher qualifications of IT executives. It is more likely that the basic role of the two sectors currently differs. It would appear that the basic role of the food industry during past decades has been to ensure that needs continue to be met, whereas the basic role of the IT sector has been to revolutionize the way we go about gathering information and communicating with one another. To generalize from this example: some sectors in society mainly serve the purpose of maintaining the fulfilment of human needs, while the purpose of other sectors is, in the main, to change how needs are met.
However, assuming that this is the case, it is obvious that the effect of investing in the economy cannot be independent of the sector in which the investment takes place. Nevertheless, this is what most macroeconomic models assume. The same applies to the growth potential of investments, which will obviously be larger in new sectors.

A second point relates to the money that flows into specific sectors. What happens to this money may differ considerably from case to case, and may have a major impact on economic development. In many service sectors, revenues are mostly paid out as wages, i.e. the incoming money is re-used for purposes of consumption. In certain industrial sectors, however, as well as in agriculture, a major proportion of revenue has to be used for buying machinery and other production factors. It is plausible to assume that a large share of these production factors will be long-term investments, whereas private consumption is not. The implication is that the sectoral structure of an economy will have a significant effect on how large a proportion of GDP is invested.

In addition to this macroeconomic dimension, there is also a spatial dimension. Using an Alpine mountain valley as an illustrative example, Buser (2005) shows that the regional multiplier of different sectors varies considerably. In some sectors,
operators tend to spend their money locally, whilst in others people buy from more distant sources or even internationally. Therefore, the sectoral structure of an economy will also determine the spatial face of the network through which added value is generated. This is particularly important in cases where support is needed for economically weak regions.

If we extend our understanding of utility as a purely economic term to encompass a more holistic perception of well-being, a third point emerges. While arms production may contribute as much to GDP as the entertainment industry, it is obvious that the contribution of arms production will, at best, be to convey some feeling of security, while in other cases it will simply contribute to destruction and misery. The effect of the entertainment industry on well-being will, at least at first glance, be much less ambiguous. Arms production may be a highly illustrative example of a sector that contributes more to GDP than to well-being, but the same reasoning may be applied to many more sectors. Imagine an economy that spends 90% of its resources in the health sector. As the health sector, by definition, only works curatively, this begs the question of what is happening in this economy if there are so many illnesses needing to be cured. Even the much-vaunted environmental sector may give rise to this suspicion: why is there so much environmental damage to repair, and how does this contribute to societal development?

The purpose of all these considerations is to open the reader's mind to the notion that different sectors contribute to societal utility in fundamentally different ways. If it is the government's objective to maximize societal utility, it may be well advised to treat sectors in different ways. The following section adopts a normative approach to developing this differentiation between sectors.
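The regional-multiplier point made above can be illustrated with a minimal sketch. The spending shares below are invented for illustration, not Buser's (2005) figures: if a sector re-spends a share s of each incoming franc within the region, total regional income per franc of initial revenue is the geometric series 1 + s + s² + ... = 1/(1 − s).

```python
# Minimal sketch of a Keynesian-style regional multiplier; the local spending
# shares below are hypothetical, not taken from Buser (2005).

def regional_multiplier(local_share: float) -> float:
    """Total regional income generated per unit of initial revenue,
    when a share `local_share` of each round of income is re-spent
    locally: 1 + s + s^2 + ... = 1 / (1 - s)."""
    if not 0.0 <= local_share < 1.0:
        raise ValueError("local_share must lie in [0, 1)")
    return 1.0 / (1.0 - local_share)

# A sector whose operators buy locally versus one sourcing from distant
# suppliers: the same initial revenue supports very different regional incomes.
print(regional_multiplier(0.6))   # 2.5  -> strong local value network
print(regional_multiplier(0.2))   # 1.25 -> most revenue leaks out of the valley
```

Under these invented shares, the first sector generates twice as much regional income per franc of turnover as the second, which is why the sectoral mix matters for supporting economically weak regions.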
The Tradition of Paternalistic Thinking in the Market Economy

It is probably one of the major achievements of capitalism that it considers every individual as a responsible operator in his own right. The emphasis on equal rights and self-responsibility is one of the major advances over feudalistic thinking. Thus, as Mill puts it, "the burden of proof is supposed to be with those who are against liberty; who contend for any restriction or prohibition. . . The a priori assumption is in favour of freedom. . ." (Mill 1963). In contemporary political philosophy this is called the Fundamental Liberal Principle. Because freedom is normatively basic, those who seek to restrict it, by whatever means, must provide sufficient justification for their actions. Since political authority and law restrict the freedom of citizens, it follows that these must be justified as well. So, in liberal political theory, the central question is whether political authority can be justified and, if so, how. Since the major works of capitalist thinkers like Adam Smith, it has taken almost 200 years for the total hegemony of individual preferences to be systematically challenged by the concept of merit goods.
The concept of "merit goods" was developed at around the same time as Samuelson developed his concept of public goods (1954, 1955). This indicates that the 1950s may generally be viewed as a time when the introduction of fundamentally new concepts had a good chance of succeeding. It is possible that Musgrave, by defining a new category of goods, hoped to repeat the success which Samuelson's new definition had achieved. Merit goods have been defined as goods where "policy aims at interference with individual preferences" (Musgrave 1957: 341). This means that merit goods are provided by the state to a larger extent than aggregated consumer demand would warrant; an example is safety belts, which the state forces car drivers to use. In justifying merit goods, Musgrave (1957: 341) argued as follows: "The apparent willingness of the public to provide for a second car and a third icebox prior to ensuring adequate education for their children is a case in point." A little later, the category of demerit goods was added as a necessary complement. The latter are goods which the state taxes or prohibits in order to reduce consumption, such as drugs.

The coining of the term "merit goods" was driven to a much greater extent by economic practice than by economic theory: it appears that Musgrave developed his limitational approach to consumer sovereignty mainly by observing the phenomenon of a public supply of goods that could not be traced back to consumer sovereignty, e.g. schooling or housing for the poor. This may be the reason why Musgrave himself developed few concrete measures for the normative determination of merit goods, something which prompted McLure (1968) to call merit wants a "normatively empty box." From a mesoeconomics perspective, it is noteworthy that the few examples which Musgrave provided to illustrate his framework tend to be sector- rather than product-related.
From the citations above, it is clear that Musgrave's intention was not to show the objective superiority of icebox A over icebox B, but to show the superiority of investments in the education sector over investments in kitchenware.

Musgrave's initiative initially met with a predominantly negative reception (Baumol 1962; Mackscheidt 1974; Solf 1993; McLure 1968; Tietzel and Müller 1998). Baumol (1962), for example, stated: "I want badly to be protected from those who are convinced that they know better than I do what is really good for me, and I want others to receive similar protection." The more recent attempts to develop the "merit goods" concept were ignored rather than rebuffed. For some time it seemed that the idea that individual preferences are the only normative base of welfare economics would remain sacrosanct in mainstream economics.

This changed in 2003, when the economist Richard H. Thaler and the lawyer Cass R. Sunstein published a paper in which they introduced the concept of libertarian paternalism, followed in 2008 by a book ("Nudge") in which they presented a broader-based version of their concept. Arguing that utility can be increased by channelling people's choices in one particular direction, they bade farewell to the almost definitional notion of textbook economics that individual
preferences are the only guide for measuring utility and therefore the only normative scale against which decisions stand to be measured. Despite some criticism (Sugden 2008), these contributions have mostly met with a very positive reception within mainstream economics, more so than outside it, and Google Scholar reports that the book "Nudge" was cited in scientific publications more than 200 times between 2008 and 2009 – an astonishing number, given the lengthy periods of time required for reviewing and preparing a paper for submission.

Although Sunstein and Thaler considerably improved the argumentation used to justify governmental intervention, the fact remains that the illustrative examples in their publications are still mostly related to sectors rather than to products. The authors claim, for example, that Americans consider themselves under-invested when it comes to pension funds and should therefore be nudged towards increasing their investment. Again, they do not suggest that the government should recommend a particular pension fund, but rather that the pensions business of the financial sector should receive public support. This means that the major frameworks used to justify governmental intervention in private transactions relate more closely to sectors than to single markets. Any normative study of paternalistic interventions would therefore benefit from a mesoeconomics perspective rather than the more frequent microeconomic one.
Paternalistic Instruments in Practical Policy

Within market economies, not many governments strictly follow the rationale of neoliberal economic policy. In most cases, the administration and policy-makers steer their citizens in one way or another. The most important instruments for doing so are the provision of information, taxation and prohibition. It will be shown that these instruments are often applied to sectors rather than to single products.
Provision of Information

In spite of the neoclassical paradigm that citizens have full access to any information they wish, many governments actively inform their citizens about various issues. One of the issues most frequently addressed is how to protect oneself from HIV/AIDS. In this case, a great deal of attention is devoted to a single market (condoms) rather than to a sector. However, such campaigns usually also give behavioural advice to citizens, so that the information is not restricted to the framework of the market for condoms that a microeconomic view would suggest. A more general perspective reveals the mesoeconomic relevance of information campaigns of this nature. In addition to HIV/AIDS, there have been public
information campaigns on SARS (Basrur et al. 2004), influenza (Aaby et al. 2006) and cholera (Lasch et al. 1984). This phenomenon indicates that governments consider their citizens to be suboptimally informed about certain health aspects of their lives. The same is not true for many other sectors, like the steel industry or computer services. Hence, this type of government intervention has a clearly sectoral motivation. The pattern also holds when public information is not about individual diseases but about general ways to live a healthy life: campaigns against smoking and drinking, campaigns for a balanced diet (for parallels see Mercer et al. 2003) and, in some cases, campaigns against driving too fast (de Waard and Rooijers 1994) and other accident-related issues. In any event, many governments are convinced that their citizens underinvest in information about relevant factors in the health sector.

Most, but not all, public information campaigns relate to the health sector. The Colombian government, for example, launched a campaign against brand piracy in the capital Bogota, i.e. a campaign aimed at generating increased respect for intellectual property. Other governments provide information about visa regulations. In any case, these information campaigns tend to be sector- rather than market-related.
Taxation

In a number of countries, taxation is used to steer economic behaviour. One of the main instruments is Value Added Tax (VAT), which is not uniform across the whole range of goods and services in all countries. Reduced rates are often applied to the tourism sector (Müller and Zaugg 2005) and to food (Aldermann and del Ninno 1999), albeit for different reasons. Whilst reduced taxes in the tourism sector are aimed at increasing the inflow of cash from other countries, the reduced tax on food follows the perception that food represents our most basic need and should thus benefit from reduced prices. Both measures are therefore sector-related, but the first pursues an economic rationale while the second can be classified as paternalistic, coupled with a social motivation. If social policy were the only motivation for the reduced tax on food, the VAT revenue foregone would be better collected in full (including from the rich, who currently also benefit from the exemption) and then handed directly to the needy. The choice of a VAT reduction instead shows that the government assumes under-expenditure on food among its target groups. These are clearly sectoral considerations.

Extra taxes are imposed on goods such as petroleum, cigarettes and alcohol. For some of these taxes, the internalisation of external effects can readily be cited as the motivation. However, alcohol consumption has few direct external effects, so the widespread extra tax on alcoholic beverages certainly has a paternalistic element. The externalities of alcohol consumption are of a rather indirect nature: Chisholm et al. (2004) and Cnossen (2007) therefore have a point in arguing that alcohol taxes are set to offset the cost of alcoholism to society. Alcoholism
causes, for example, an inability to contribute to GDP. In public finance theory, however, such indirect effects are rarely considered as externalities. In any case, there is a market for alcoholic beverages rather than an alcohol sector. Some tax peculiarities therefore remain for which microeconomic perspectives are better suited than mesoeconomics approaches.
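The earlier aside that, on purely social-policy grounds, the foregone VAT revenue would be better collected in full and transferred directly to the needy can be made concrete with invented numbers (the rates and spending figures below are hypothetical, not from any cited study):

```python
# Hypothetical illustration of the VAT argument above. Rates are expressed
# in per-mille so the arithmetic stays exact; all figures are invented.

VAT_FULL, VAT_REDUCED = 80, 25          # per-mille: a 8.0% vs a 2.5% VAT rate
spend_poor, spend_rich = 3_000, 6_000   # annual food spending per household

# Option A: a reduced VAT rate on food. The relief is proportional to food
# spending, so the richer household captures more of it.
relief_poor_A = spend_poor * (VAT_FULL - VAT_REDUCED) // 1000
relief_rich_A = spend_rich * (VAT_FULL - VAT_REDUCED) // 1000

# Option B: charge the full rate and transfer the same total revenue
# only to the needy household.
relief_poor_B = relief_poor_A + relief_rich_A

print(relief_poor_A, relief_rich_A)  # 165 330: the rich household gains twice as much
print(relief_poor_B)                 # 495: triple the poor household's relief under targeting
```

Under these invented figures, two thirds of the untargeted tax break's cost accrues to the better-off household, which is exactly the distributional point being made: a VAT reduction is a sectoral instrument, not an efficient social transfer.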
Prohibitions

There may be instances where governments regard their citizens' consumption of certain goods and services as beyond the bounds of acceptability. Their response in such instances may be to impose a legal prohibition on these products. In the case of any prohibition, there is a rather complex interplay between a microeconomic, product-oriented view and a mesoeconomic, sector-oriented one. Many prohibitions are directly related to single products. In fact, no government freely allows the consumption of heroin, few have legalized cannabis, and several Muslim governments ban alcohol. However, although these single products are the focus of the legislators, all the examples are related to the drugs sector. Thus, prohibitions are far more probable in the drugs sector than in, say, the tourism sector. Indeed, the topography of public prohibition looks highly sectoral. Apart from drugs, few prohibitions exist nowadays in respect of consumer goods. However, there are still some other economic sectors where prohibition plays a major role. One example is the use of land for building purposes, where many advanced economies allow building activities only in specially designated zones. Other examples include animal housing (which has to comply with animal welfare standards), polluting production technologies and genetically modified crops.
Sectors and Utility

For several decades now, economists have considered Gross Domestic Product to be the most reliable indicator of our aggregated utility. If this concept were justified, sectors would hardly matter: according to the rationale of GDP, one dollar spent on anti-wrinkle cream has the same effect on aggregated welfare as one dollar spent on vaccination against polio. Of course, everybody knows that such an estimation of utility is, at best, a very rough proxy of reality. The core question is whether any better scales are available for estimating the development of a society in a normative way. Recently, happiness and subjective well-being have been put forward as a reliable alternative to traditional macroeconomic indicators. Whilst GDP and happiness are only mildly correlated (Hagerty and Veenhoven 2003), some scholars consider aggregated, self-reported happiness a better proxy of utility than GDP (Hajiran 2006; Ng 2008). In addition, when it comes to comparing interpersonal levels of
utility, the prospects appear better using happiness indicators than deploying mainstream macroeconomics (Mann 2007). The latter is a crucial condition for a normative approach to social policy measures. Any attempt to estimate the effect of different ranges of structures within an economy on aggregated levels of happiness is likely to benefit far more from the adoption of a sectoral perspective than from a microeconomic approach. Comparing the contribution made by Coke and Pepsi to our well-being (a typical microeconomic question) would not appear to be a promising exercise. The best way to resolve this issue is by comparing willingness to pay (WTP) for the two beverages: it may be postulated that, if the aggregated WTP for Coke is a little higher than for Pepsi, its contribution to utility will also be somewhat higher. It would be difficult to show that Pepsi makes people happier than Coke. An analogy to this microeconomic comparison cannot easily be made at the sectoral level. Consider an economy in which the real estate sector occupies a similar share of GDP to the food sector. It may be assumed, however, that the revenue of the real estate sector has only a very limited causal relationship with well-being: it merely reflects transfers between proprietors, whereas the positive utility comes from the effect of living in a particular building on a certain piece of land. Food consumption, on the other hand, has a closer relationship to utility, even if blurred by rising obesity rates, where consumption and utility may be negatively correlated. Figure 1 summarizes various sector categories as a possible basis for estimating their contribution to utility. For future research it would be worthwhile in each case to probe the systematic correlations between economic activities and societal happiness.
Real estate and used cars are important examples where economic turnover does not match estimators for consumption. However, there may
Fig. 1 Link between economic activities and happiness: classification categories (the figure distinguishes sectors whose revenue is a proxy for consumption – productive or curative, basic or refined, physical or spiritual – from sectors whose revenue is a proxy for destruction or for transaction)
still be a positive correlation between turnover in these sectors and utility, because the items typically change hands when somebody with a greater willingness to pay for them appears. The same conjecture – i.e. the assumption of a merely mildly positive correlation between revenue and utility – might be applied to the military sector. Two arguments could be constructed (although I believe neither of them): one is that, in times of peace, more weapons may produce a greater feeling of security and therefore higher utility. The other is that, in times of war, more weapons may increase the chances of winning and surviving, again a condition for utility. However, the clearest causal link between utility and economic activities will be identified in sectors dedicated to consumption-facilitating activities. But even in these cases, there may be many different preconditions for the generation of utility by economic means. Curative sectors like the health or the environmental sector will generate a considerable amount of utility – but in many cases only because other economic activities have created the need to do so. Curative economic activities create utility under circumstances where other economic activities have generated negative externalities. There are further dimensions of consumption goods and services that influence the degree of utility derived from them. Maslow (1970), for example, noted that a hierarchy of needs exists. Studies on the connection between consumption and happiness indicate that fulfilling basic needs may generate more utility than fulfilling more refined needs. Another important question is whether the generation of utility depends on the physical or immaterial character of a good. Does a religious ceremony for which I pay through my church taxes create utility in the same way as buying a necklace?
Mann (2008) argues that some commercial activities in the service sector may actually create less utility than their substitutes if these are free services among friends. All these are sectoral issues that potentially contribute to the link between economic activities and utility.
Conclusions

Economics matters because economic activities generate utility; it would not occupy the central position among the social sciences that it holds today if this utility were not central to our lives. The observation that different economic activities generate different quantities and qualities of utility has led to theoretical concepts on the part of some economists and to different levels of government intervention in practice. A great deal of research needs to be carried out if the link between utility (particularly in the form of human, subjective well-being) and the type of economic activity is to be specified. The most appropriate level for questions of this sort is the mesoeconomic perspective. The range of structures under which economic activities are played out can be effectively addressed by defining sectors. And this range of structures has a major influence on the merit of economic activities.
The important question is why policy makers, by imposing sectoral taxes and prohibitions or granting sectoral subsidies, have focussed on sectors far more visibly than economists have taken mesoeconomics seriously, particularly from a normative viewpoint. The most important reason is probably the desire for abstraction that many economists share. The added value of macroeconomic analysis, if every dollar generated can be counted the same, is so considerable that one is certainly tempted to neglect obvious differences between sectors. It would help, however, in the interests of keeping economists in the societal discourse, if they were to address inter-sectoral differences systematically and intensively.
Bibliography

Aaby K, Abbey RL, Herrmann JW, Treadwell M, Jordan CS, Wood K (2006) Embracing computer modeling to address pandemic influenza in the 21st century. J Public Health Manage Pract 12(4):365–372
Aldermann H, del Ninno C (1999) Poverty issues for zero rating VAT in South Africa. J Afr Econ 8(2):182–208
Basrur SV, Yaffe B, Henry B (2004) SARS: a local public health perspective. Revue Canadienne de Santé Publique 95(1):22–24
Baumol W (1962) The doctrine of consumer sovereignty – discussion. Am Econ Rev 52:289
Buser B (2005) Regionale Wirtschaftskreisläufe und regionale Wachstumspolitik. Shaker, Aachen
Chisholm D, Rehm J, van Ommeren M, Monteiro M (2004) Reducing the global burden of hazardous alcohol use: a comparative cost-effectiveness analysis. J Stud Alcohol 65(6):782–793
Cnossen S (2007) Alcohol taxation and regulation in the European Union. Int Tax Public Finance 14(6):699–732
De Waard D, Rooijers T (1994) An experimental study to evaluate the effectiveness of different methods and intensities of law enforcement on driving speed on motorways. Accid Anal Prev 26(6):751–765
Hagerty MR, Veenhoven R (2003) Wealth and happiness revisited – growing national income does go with greater happiness. Soc Indic Res 64(1):1–27
Hajiran H (2006) Toward a quality of life theory; net domestic product of happiness. Soc Indic Res 75(1):31–43
Lasch EE, Abed Y, Marcus O, Shbeir M, El Alem A, Ali Hassan N (1984) Cholera in Gaza in 1981: epidemiological characteristics of an outbreak. Trans R Soc Trop Med Hyg 78(4):554–557
Mackscheidt K (1974) Meritorische Güter: Musgraves Idee und deren Konsequenzen. Das Wirtschaftsstudium 3:273
Mann S (2007) Comparing interpersonal comparisons in utility theory and happiness research. Forum Soc Econ 36(1):29–42
Mann S (2008) From friendly turns towards trade – on the interplay between cooperation and markets. Int J Soc Econ 35(5/6):326–337
Maslow AH (1970) Motivation and personality. Harper and Row, New York
McLure CE (1968) Merit wants: a normatively empty box. Finanzarchiv 27(3):474–483
Mercer SL, Green LW, Rosenthal AC, Husten CG, Khan LK, Dietz WH (2003) Possible lessons from the tobacco experience for obesity control. Am J Clin Nutr 77(4):1073–1082
Mill JS (1963) In: Robson JM (ed) Collected works of John Stuart Mill, vol 21. University of Toronto Press, Toronto
Müller H, Zaugg B (2005) Lobbying in Swiss tourism. Tourism Rev 60(4):6–11
Musgrave RA (1957) A multiple theory of budget determination. Finanzarchiv, NF 17:341
Ng Y-K (2008) Environmentally responsible happy nation index: towards an internationally acceptable national success indicator. Soc Indic Res 85(3):425–446
Solf G (1993) Theatersubventionierung – Möglichkeiten einer Legitimation aus wirtschaftstheoretischer Sicht. J. Eul, Bergisch Gladbach
Sugden R (2008) Why incoherent preferences do not justify paternalism. Constitutional Political Economy 19(6):226–248
Thaler RH, Sunstein CR (2003) Libertarian paternalism. Am Econ Rev 93(2):175–179
Thaler RH, Sunstein CR (2008) Nudge – improving decisions about health, wealth and happiness. Yale University Press, New Haven
Tietzel M, Müller C (1998) Noch mehr zur Meritorik. Zeitschrift für Wirtschafts- und Sozialwissenschaften 118:87–127
Part II
Sectors Matter for Development
Economic Growth Through the Emergence of New Sectors

Andreas Pyka and Pier Paolo Saviotti
Introduction

In many countries, economic development has created an enormous amount of wealth, increasing the welfare of most members of the population. In many growth models, the representation of economic development underlying the formal modelling exercises assumed that technical progress would, in the course of time, increase the productivity of all existing processes, leading to a greater output per unit of resources employed in production. Economic growth would arise because more final goods would thus be available for each member of the population. Even the most casual observation tells us that this representation fails to correspond to what happened in one important respect: both the types of output and the activities used to produce them are qualitatively different from those previously used in the economy. We can thus say that the composition of the economic system changed during economic development. Here, by composition we mean the list of all objects (products or services), activities and actors (individual and institutional) required to describe the economic system at a given time. Furthermore, qualitative change necessarily entails structural change, although the two phenomena are not identical. Structural change results only from the emergence of new sectors, from the extinction of old ones, and from the changing weights of surviving sectors. Qualitative change can occur at lower levels of aggregation, for example in the internal composition of a sector or even in that
A. Pyka (*) University of Hohenheim, Stuttgart, Germany e-mail:
[email protected] P.P. Saviotti INRA GAEL, Universite´ Pierre Mende`s-France, BP 47, 38040 Grenoble, France S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_4, # Springer-Verlag Berlin Heidelberg 2011
of the technical objects or of the activities within each sector. The two phenomena are, however, closely related. The importance of the previous considerations changes drastically depending on whether qualitative change is only an effect of previous economic development or also a determinant of subsequent economic development. In the latter case changes in the composition of the economic system should become one of the important variables in models of economic growth and development. Our knowledge of the relationship between economic development and qualitative change is very limited. With our model of economic development by the creation of new sectors we attempt to shed light on some important aspects of the role played by qualitative change in economic development, by laying the foundations of a model in which changes in the composition of the economic system are endogenously generated by the evolution of the system itself and, in turn, affect its future development. To put it in another, slightly different, form, we can say that economic development is a process in which new activities emerge, old ones disappear, the weight of all economic activities and their patterns of interaction change. Most existing models of growth are macroeconomic models. Thus they do not take the composition of the economic system into account. This does not mean that it is impossible for any macroeconomic growth model to derive some of the implications of changing composition. However, it means that this possibility is limited. For example, in Romer’s models (1987, 1990) one of the outcomes of R&D activities is to increase the number of sectors producing capital goods. Obviously, this amounts to a change in the composition of the economic system. In Aghion and Howitt’s (1998) multisectoral extension of their endogenous growth model the existence of several sectors producing intermediate outputs and of their interactions are examined. 
However, the number of sectors does not change in the course of time. In general, in these models there is no indication of what the new capital goods could be or of their potential interactions with the consumer goods sectors. In fact, consumer goods and services are the character really missing from all these models. This is probably due to the use of production functions that, while admitting many inputs, can produce only one output. In particular, the implications of these models for the variety of the economic system are unclear. In Romer's models we can expect variety to increase, although we do not know under what circumstances and in what directions. In Aghion and Howitt's models variety is likely to remain constant. Thus we can see here that endogenous growth models, while an important improvement with respect to the Solow-Swan vintage, still have difficulties coping with the dynamics of qualitative change taking place in the economy. Macroeconomic growth models are useful precisely because they do away with all, or some, of the complexities inherent in the composition of the economic system. The simplification they achieve in this way helps these models reach some important results in a parsimonious and elegant way, but limits their ability to deal with the composition of the economic system. Of course, such a limitation is much more serious in the long
than in the medium or short run, since changes in composition affect the macroeconomic level rather slowly. Thus, the more profound limitation of macroeconomic growth models is likely to lie in their ability to analyse long-term trends in economic development. Another type of research relevant for the design of our model is that on structural change. Important examples are the work by Salter (1960), by Cornwall (1977), and more recently by Fagerberg (2000) and by Fagerberg and Verspagen (1999). These works are mostly empirical, but an important attempt to formulate a theoretical model linking structural change and economic growth was made by Pasinetti (1981, 1993). The work of all these authors takes structural change into explicit account, and they provide an important inspiration for our work. However, this past work on structural change still leaves a number of problems, at least some of which we aim to overcome. First, the definition of structural change used by the previously quoted authors refers to the emergence of new sectors, to the disappearance of older ones and to their changing weights in the economic system. Aspects of qualitative change taking place at a lower level of aggregation, although having impacts at the sectoral level, are not taken into account. In this chapter the term qualitative change refers to a wider range of changes in the composition of the economic system. Second, the possibility of detecting structural change and studying its effects depends heavily on the availability of statistical data about production and, above all, on the definition of industrial sectors used. Statistical classifications of production are changed infrequently and in ways that do not necessarily reflect the real changes taking place in the economy.
Thus, as emerges clearly from the work of Fagerberg and Verspagen (1999) and Fagerberg (2000), the industrial classification that they have to use in order to compare a large number of countries hides some types of structural change. Third, these studies on structural change have remained somewhat separate from the macroeconomic growth models. Fourth, even the most sophisticated model linking structural change and economic growth, that of Pasinetti, has very limited dynamic features: it leads us to the conclusion that in the long run the economic system cannot follow a balanced growth path unless new sectors emerge and "absorb" the resources potentially displaced by the evolution of older sectors, but it does not tell us anything about the dynamics of the emergence of new sectors or about their relationship to older ones. This chapter introduces the details of a model of economic development that includes qualitative change amongst its main determinants. We thus hope to contribute to a better understanding of the role of qualitative change and to bridge the gap between macroeconomic growth models and structural change studies. In what follows, the conceptual nature of the present model is explained first, followed by the more technical aspects of the model. A further section presents a selection of results from simulation experiments. The final section concludes and gives a brief outlook on future extensions of the model on our research agenda.
A Model of Economic Development by the Creation of New Sectors

Our model of economic development by the creation of new sectors has been developed over the past 7 years and has already undergone a number of modifications (see Saviotti and Pyka 2004a, b, c, 2008a, b, 2009, 2011). We will first give a brief verbal description of the model and then introduce the equations.
A Narrative Description of the Model

In the model, each sector is generated by an important innovation. Such an innovation creates a potential market and gives rise to what we call an adjustment gap. The term adjustment gap reflects the fact that, as soon as a potential market is created, it is in fact empty: neither the productive capacity nor the demand for the innovation is present. Both are gradually constructed during the life cycle of the new sector. As the new sector matures, the adjustment gap is continuously closed: a productive capacity is created which in the end matches demand. When this happens, the sector enters its saturation phase. The productive capacity is generated by Schumpeterian entrepreneurs establishing new firms, initially induced by the expectation of a temporary monopoly and the related extraordinary profits. The success of the innovation gives rise to a bandwagon of imitators. The number of firms in the new sector gradually rises, but this also raises the intensity of competition in the sector, thus gradually reducing the inducement to further entry. After the intensity of competition in the new sector reaches levels comparable to those of established sectors, the new sector is no longer innovating but becomes part of the circular flow. When a sector achieves maturity in the way described above, an inducement exists for Schumpeterian entrepreneurs to set up a new niche, which can eventually give rise to the emergence of a new industry. In other words, the declining economic potential of maturing sectors induces the creation of newer and more promising ones. Competition plays a very important role in this process of creation of new industries. Entrepreneurs are induced to establish new firms by the expectation of a temporary monopoly, that is, by the absence of competition. However, the new sector could not achieve its economic potential unless imitative entry took place.
As a result, the intensity of competition rises, thus reducing the inducement to further entry. An additional contribution is made to the dynamics of our artificial economic system by inter-sector competition. Inter-sector competition arises when two sectors produce comparable services. Inter-sector competition is an important component of contestable markets (Baumol et al. 1982) and can keep the overall intensity of competition of the economic system high, even when each sector achieves very high levels of industrial concentration. In our model, the variety of the economic system plays an essential role. Economic variety is approximated by the number of different sectors. By raising
Fig. 1 (a) Number of firms, (b) sectoral employment and linear trend of aggregate employment, (c) income development
variety, the creation of new sectors provides the mechanism whereby economic development can continue in the long run. In this way, the economic system can escape the trap generated by the imbalance between rising productivity and saturating demand (Pasinetti 1981, 1993; Saviotti 1996), which would occur in a system of constant composition. This also affects the macroeconomic employment situation: in particular, this artificial economic system can keep generating employment even when employment creation is falling within each sector (Saviotti and Pyka 2004b). In order to illustrate qualitatively the developments generated by our model, Fig. 1a shows the development of the number of firms in a certain industry. Within a wide range of conditions, the number of firms in each sector grows initially, reaches a maximum and then falls to a fairly low value. Within these conditions, each sector seems to follow a life cycle, similar to the ones detected by Klepper (1996), Jovanovic and MacDonald (1994), and Utterback and Suarez (1993). However, in our model this industry life cycle is created by variables very different from those used by these authors, whose models include increasing returns to R&D, radical innovations and the emergence of dominant designs. In our case, the cyclical behavior is caused only by the combined dynamics of competition and market saturation. We do not wish to say that cyclical behavior cannot arise under the conditions identified by the previous authors; we simply say that cyclical behavior can also arise from the interplay of competition and market saturation. Figure 1b then shows the course of development of employment in single industries – which first increases strongly but is then reduced considerably in the shake-out period – and the trend of aggregate employment at the macroeconomic level, which can be positive despite the decrease of sectoral
employment. Figure 1c displays the overall income development in our economic system. Despite the severe shake-out processes that drastically reduce both the number of firms in mature industries and sectoral employment, Fig. 1b, c illustrate a decisive advantage of our approach of numerically modelling industry evolution: by aggregating the various figures we can observe not only the sectoral developments but also the macroeconomic figures, thereby detecting overall beneficial or disadvantageous developments.
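The aggregation step described above can be sketched numerically. This is an illustrative toy, not the chapter's model: the life-cycle-shaped sectoral employment paths and all parameter values below are invented for demonstration.

```python
# Toy illustration of the aggregation step described above: sectoral
# employment series with a life-cycle shape, entering at staggered dates,
# are summed into the macroeconomic figure.  The paths and parameters are
# invented for demonstration, not output of the chapter's model.

import math

def sector_employment(t, birth, scale=100.0, rise=80.0, decay=400.0):
    """Life-cycle shaped employment path for a sector born at time `birth`."""
    if t < birth:
        return 0.0
    age = t - birth
    return scale * (1.0 - math.exp(-age / rise)) * math.exp(-age / decay)

T = 800
births = [0, 250, 500]  # staggered emergence of new sectors
macro = [sum(sector_employment(t, b) for b in births) for t in range(T)]
# Each sector's employment peaks and declines, while the aggregate can keep
# growing as long as new sectors keep emerging.
```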
The Simulation Model

In the following paragraphs, we describe the formal aspects of our model in detail. Two remarks will ease the understanding of this formal description. First, to illustrate the implemented relationships, we display figures documenting the results of one numerical experiment which we have labelled the standard scenario. A list of variables can be found in Appendix 1. In the results section we introduce further experiments in which we modify the various constants. The second remark concerns these constants, which appear frequently in our model. Basically, they all serve as weights for the different factors which play a role in economic development. To shorten the following description, we summarize these constants in a table in Appendix 2. Equation (1) is the central equation. It describes the change dNit in the net number of firms in each sector i in the course of time, where Nit is the number of firms in sector i at time t. The change in the net number of firms is the result of processes of entry into and exit from the sectors. The term k1FAitAGit describes the rate of entry; the two terms –ICit and –MAit describe processes of exit. The variables determining the rate of entry, FAit and AGit, are financial availability and the adjustment gap respectively; the variables determining exit are the intensity of competition ICit and mergers and acquisitions MAit. The four entry and exit terms are graphically displayed in Fig. 2a–d; their meanings and computation are described immediately below. Figure 1a represents graphically the change in the number of firms in different sectors: the number of firms rises at first, reaches a maximum and then falls to a low value. As already mentioned, the number of firms in subsequent industries traces the shape of an industry life cycle in our standard scenario.

$\frac{dN_i^t}{dt} = k_1\, FA_i^t\, AG_i^t - IC_i^t - MA_i^t$   (1)
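As a rough numerical illustration of (1), the following sketch integrates the net-entry equation with stand-in paths for FAit, AGit, ICit and MAit. These stand-ins and the value of k1 are our own toy assumptions, not the model's actual sub-equations, which follow later in the chapter.

```python
# Toy numerical integration of equation (1): dN/dt = k1*FA*AG - IC - MA.
# The paths chosen for FA, AG, IC and MA are illustrative stand-ins, not
# the model's actual sub-equations; k1 and all coefficients are invented.

def simulate_firm_numbers(T=800, k1=0.05, dt=1.0):
    """Integrate the net entry/exit dynamics of the number of firms N."""
    N = [1.0]  # start from a single pioneering firm
    for t in range(1, T):
        FA = 1.0                      # financial availability (toy: constant)
        AG = max(0.0, 1.0 - t / 600)  # adjustment gap closing over time (toy)
        IC = 0.0004 * N[-1]           # intensity of competition rises with N (toy)
        MA = 0.0001 * N[-1]           # mergers and acquisitions (toy)
        dN = k1 * FA * AG - IC - MA
        N.append(max(0.0, N[-1] + dN * dt))
    return N

N = simulate_firm_numbers()
# N rises, peaks and then declines: the life-cycle shape of Fig. 1a.
```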
This specific pattern emerges from the interplay of the four variables on the right-hand side of (1). FAit is the term for financial availability, a variable we expect to contribute positively to industry development. In the standard scenario displayed in Fig. 2a the financial availability for previous sectors is endangered by the emergence of a follow-up industry with high growth rates (see Saviotti and Pyka 2009).
Fig. 2 (a) Financial availability, (b) adjustment gaps, (c) intensity of competition, (d) mergers and acquisitions
AGit (Fig. 2b) is the adjustment gap at time t and describes the potential size of a market. When a new industry emerges a new adjustment gap is opened up which is positively influenced by the search activities in an industry and negatively influenced by an increasing sectoral productivity and saturated demand. ICit (Fig. 2c) is the intensity of competition at time t composed of two influences stemming from intra-industry and from inter-industry competition. The intensity of competition increases and declines with the number of firms in the respective industry and is influenced positively by the emergence of subsequent industries later on (see Saviotti and Pyka 2008a). Finally, the term MAit (Fig. 2d) is the number of mergers and acquisitions at time t. During industry development a consolidation process fed by mergers and acquisitions and failures shapes the evolution (see Saviotti et al. 2007).
Financial Availability

Financial availability represents the amount of financial resources which are available for investment in sector i. It is not just the amount of resources in the economic system. It also reflects the knowledge required to estimate the economic development potential of new industries. In fact it is the fraction of the financial resources of the whole economic system which investors are prepared to invest in sector i. Thus, FAit depends on the expectations of the market potential of given
innovations. We can expect growing financial availability to accelerate the rate of entry of firms into a sector. The comparison of different constant levels of financial availability would only tell us that an economic system able to invest more in an emerging sector will have a higher rate of growth of firms in that sector; it would not give us a realistic picture of the dynamics of investment in emerging sectors. Thus, it is possible to assume that financial operators observe the behaviour of the economic system and spot the potential of emerging sectors. If they notice that some aspects of an emerging sector, for example the rate of creation of new firms, grow more rapidly than the average for the whole economic system, they can invest more heavily in the emerging sector than in the average – and typically more mature – sector, in the expectation of enjoying higher than average (supranormal) rates of profit. This kind of behaviour is represented by (2):

$FA_i^t = k_2 \left[ 1 + k_3 \left( \frac{dN_i^t}{dt} - \sum_j \frac{dN_j^t}{dt} \right) \right]$   (2)
Equation (2) is a form of replicator dynamics: it tells us that the investment in sector i (FAit) depends on the extent to which the rate of growth of the number of firms in sector i is above that of the average in the rest of the economic system. However, for a given difference between the rate of growth of the number of firms in sector i and in the rest of the economic system, the amount of FAit will depend on the two parameters k2 and k3, where k2 is the amount of financial resources available in the whole economic system, and k3 is the optimism (or over-optimism) of financial operators with respect to the development potential of sector i. A typical development of FAit is displayed in Fig. 2a.
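Equation (2) can be sketched as follows. The function implements the verbal reading of the replicator rule (growth of sector i relative to the average of the rest of the system); the default values for k2 (total financial resources) and k3 (investor optimism) are illustrative, not calibrated parameters.

```python
# Sketch of equation (2) as replicator dynamics, following the chapter's
# verbal description: investment in sector i rises when its firm-population
# growth rate exceeds the average growth rate of the other sectors.  The
# defaults for k2 and k3 are invented for illustration.

def financial_availability(growth_rates, i, k2=1.0, k3=0.5):
    """FA_i = k2 * [1 + k3 * (own growth rate - rest-of-system average)]."""
    others = [g for j, g in enumerate(growth_rates) if j != i]
    rest_avg = sum(others) / len(others) if others else 0.0
    return k2 * (1.0 + k3 * (growth_rates[i] - rest_avg))

# A sector growing faster than the rest attracts above-average funding:
fa_fast = financial_availability([0.2, 0.1, 0.1], i=0)  # exceeds k2
fa_avg = financial_availability([0.1, 0.1, 0.1], i=0)   # equals k2
```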
Adjustment Gap

The term adjustment gap AGit indicates the expected size of the market created by a pervasive major innovation. The term gap indicates something to be compensated or filled: as the innovation emerges there is neither a demand nor a production capacity for it. Potential users and consumers do not know about its existence and properties, and entrepreneurs have not yet had the time to invest in new production facilities. As demand grows and production facilities are built, the gap is gradually closed, giving rise to a saturated market. The gap may never be completely closed if the output of the sector keeps changing in a qualitative way. An example of this change would be today's cars compared to those of Henry Ford's era: today's cars contain many new internal structures and functionalities, supplying new services, which were completely absent in much older cars. The implication is that while a sector can saturate in volume terms it will not necessarily saturate in value terms (Saviotti et al. 2007). Also, we cannot expect the size of the adjustment gap to fall at all times after the emergence of a new sector. During the life cycle of the sector, innovations resulting from search activities
(SE_{i,t}, see later) have two effects: (1) they increase efficiency, thus reducing costs and prices; (2) they increase the services supplied by the product (Y_{i,t}, see later) and the degree of product differentiation (DY_{i,t}, see later). As a consequence of both (1) and (2), the population of potential adopters of the output of sector i grows. Thus AG_{i,t} can even grow in the intermediate phases of the life cycle of the sector. Eventually, even if no complete saturation takes place, the size of AG_{i,t} will fall below its maximum. We can then distinguish in the life cycle of each sector an early, more entrepreneurial period, with high rates of dN_{i,t}, of profit etc., and a more mature, more managerial period during which production processes become relatively more routine-like. AG_{i,t} can be defined as the difference between maximum and instant demand (3). Figure 2b shows the dynamics of the adjustment gaps of the different sectors in the system.

AG_{i,t} = D^{max}_{i,t} - D_{i,t}   (3)
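The bookkeeping of (3) is trivial but anchors the rest of the model. A minimal Python sketch (function and argument names are ours, for illustration only):

```python
def adjustment_gap(d_max, d_instant):
    """AG_it = D_max,it - D_it: the part of the potential market not yet filled."""
    return d_max - d_instant
```

At market creation instant demand is zero, so the gap equals maximum demand; as demand grows toward the maximum, the gap closes.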
Demand and Prices

In our model demand depends on the product price p_{i,t}, on the services Y_{i,t} supplied by the product, on product differentiation DY_{i,t}, and on the income disposable for purchases in this sector, Dispo_{i,t}, relative to the overall macroeconomic income Income_t, according to (4). We can expect demand to increase as p_{i,t} (product price) falls and as Y_{i,t} (services supplied by the product) and DY_{i,t} (product differentiation) rise. Also, p_{i,t} can be expected to fall due to the growing efficiency of production processes and to rise due to growing product quality or growing product differentiation. The relative share of income available for sector i (Dispo_{i,t}/Income_t) exerts a positive influence as well and is of particular importance when the new sector emerges. Equation (4) describes the formal relationship for demand in our model. The constant kpref_i is introduced in order to investigate the influence of changing demand preferences concerning the output of a sector.

D_{i,t} = kpref_i \cdot \frac{Dispo_{i,t}}{Income_t} \cdot \left( \frac{Y_{i,t}\, DY_{i,t}}{p_{i,t}} \right)^{k_4}   (4)
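Equation (4), as reconstructed from the garbled original, can be sketched in Python; the exact placement of the exponent k_4 and all names are our reading, not the authors' code:

```python
def demand(kpref, dispo, income, y, dy, price, k4=1.0):
    """Demand (4) as reconstructed: preference constant times income share
    times (Y * DY / p) ** k4; rises with services and differentiation, falls with price."""
    return kpref * (dispo / income) * (y * dy / price) ** k4
```

The sketch reproduces the qualitative behaviour described in the text: demand falls in the price and rises in services and the income share.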
The development of sectoral demand in our standard scenario is displayed in Fig. 3. Demand first increases strongly due to increasing production efficiency, which leads to decreasing product prices. The enlargement of the services provided and of product differentiation increases demand as well. Saturation tendencies and competition from other sectors finally decrease sectoral demand. The services Y_{i,t} supplied by a given product i (5) and the product differentiation DY_{i,t} (6) can be expected to vary in the course of time due to the effect of sectoral search activities SE_{i,t}. We expect Y_{i,t} and DY_{i,t} to rise according to a logistic equation. Only search activities can lead to their growth, although this will occur at different rates due to the differing effectiveness of search activities in different sectors.
Fig. 3 Demand
Fig. 4 Product services
This differential effectiveness is described in the model by the parameters k_5 and k_6 for services, and k_7 and k_8 for product differentiation.

Y_{i,t} = Y_0 + \frac{1}{1 + \exp[k_5 - k_6 (SE_{i,t} - SE_0)]}   (5)

DY_{i,t} = DY_0 + \frac{1}{1 + \exp[k_7 - k_8 (SE_{i,t} - SE_0)]}   (6)
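Since (5) and (6) share the same logistic form, they can be written once and reused; a sketch with illustrative parameter defaults (the k values here are placeholders, not the calibrated ones):

```python
import math

def logistic_level(se, se0=0.0, base=0.0, k_a=5.0, k_b=0.1):
    """Logistic growth of product services (5) or differentiation (6) in search
    activities SE: slow start, acceleration, then exhaustion toward base + 1."""
    return base + 1.0 / (1.0 + math.exp(k_a - k_b * (se - se0)))
```

Calling it with (k_5, k_6) gives Y_{i,t}, with (k_7, k_8) gives DY_{i,t}.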
Again for the standard scenario, the development of product services Y_{i,t} is displayed in Fig. 4. In early periods the potential for improvements of services is large and the service level increases at an accelerating rate. In later periods the possibilities for further improvements are increasingly exhausted; the development slows down and finally stops. The development of product differentiation DY_{i,t} looks similar, depending on the respective constants. Prices p_{i,t} are calculated in (7) as unit costs uc_{i,t} plus a mark-up k_{MU} (8). Unit costs are determined by labour costs (labour_{i,t} w_{i,t-1}, where w denotes wages in sector i at time t),
physical capital costs (Investment_{i,t} \cdot i, where i denotes the interest rate), and by the effects of increasing services and increasing product differentiation. The mark-up k_{MU} in the standard scenario is fixed at 20% of unit costs.

p_{i,t} = p_0 + k_{MU} \cdot uc_{i,t}   (7)

uc_{i,t} = 1 + k_9\, (labour_{i,t}\, w_{i,t-1} + Investment_{i,t}\, i) \cdot \exp(k_{11}\, Y_{i,t} + k_{12}\, DY_{i,t})   (8)

wages_{i,t} = k^{wages}_{i,t} \cdot \frac{Q_{i,t-1}\, P_{i,t-1}}{labour_{i,t}}   (9)

k^{wages}_{i,t} = wages_0 + k_{10}\, wages_{i,t-1}   (10)
Sectoral wages wages_{i,t} are determined by the industry turnover of the previous period Q_{i,t-1} P_{i,t-1} and the size of the sectoral workforce labour_{i,t} (9). Furthermore, the weight k^{wages}_{i,t} is composed of a constant wages_0 and a part which leads to higher wages in consecutive sectors. In Fig. 5a the development of unit costs is displayed. Unit costs first increase, driven by increasing wages (displayed in Fig. 5c). The increasing wages are eventually more than offset by increased productivity, which leads to falling unit costs again. The development of prices (Fig. 5b) follows unit costs closely.
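Equations (7)-(9) chain together as cost, price and wage formation. A hedged Python sketch; the constants (k_9, k_{11}, k_{12}, and a mark-up factor of 1.2 standing in for the 20% margin) are illustrative placeholders:

```python
import math

def unit_costs(labour, wage_prev, investment, interest, y, dy,
               k9=0.1, k11=0.05, k12=0.05):
    """Unit costs (8): labour and capital costs, scaled up by the service
    level Y and differentiation DY via an exponential factor."""
    return 1.0 + k9 * (labour * wage_prev + investment * interest) * math.exp(k11 * y + k12 * dy)

def price(uc, p0=0.0, k_mu=1.2):
    """Mark-up pricing (7); k_mu = 1.2 stands in for the 20% mark-up
    of the standard scenario (an assumption of this sketch)."""
    return p0 + k_mu * uc

def wages(k_wages, turnover_prev, labour):
    """Sectoral wages (9): weighted turnover per worker of the previous period."""
    return k_wages * turnover_prev / labour
```

With these defaults, prices track unit costs closely, as in Fig. 5b.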
Fig. 5 (a) Unit costs, (b) prices, (c) wages
Fig. 6 (a) Accumulated demand, (b) sectoral search activities
Search Activities

Search activities SE_{i,t} are an important component of our model. They are considered to be a generalized analogue of R&D. Search activities include all the activities which scan the external environment to look for alternatives to existing routines. Search activities can be fundamental or sectoral. We can expect search activities to be affected by demand (11). In particular, sectoral search activities [(11) and Fig. 6b] rise with the accumulated demand Dacc_{i,t} [(12) and Fig. 6a] for the output of the same sector.

SE_{i,t} = SE_0 + k_{13}\,[1 - \exp(-k_{14}\, Dacc_{i,t})]   (11)

Dacc_{i,t} = \sum_t D_{i,t}   (12)
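Equation (11) in code; the defaults k_{13} = 40 and k_{14} = 0.005 are borrowed from the experiment tables later in the chapter purely for illustration:

```python
import math

def search_activities(d_acc, se0=0.0, k13=40.0, k14=0.005):
    """Sectoral search activities (11): rise with accumulated demand Dacc
    and saturate at se0 + k13, so k13 caps the technological opportunity."""
    return se0 + k13 * (1.0 - math.exp(-k14 * d_acc))
```

The saturation ceiling se0 + k13 is what experiment 1 below varies across populations.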
The maximum demand D^{max}_{i,t} defines the maximum possible size of the market for sector i. The concept of the adjustment gap AG_{i,t} described above is defined as the difference between maximum and instant demand. Since at time zero (the creation of the new market) instant demand D_{i,t} is equal to zero, D^{max}_{i,t} represents the maximum possible size of the market for i at the time of the creation of the market. This maximum size is influenced by the technological opportunities already exploited for this industry, TI_{i,t}, which in turn are a function of search activities, as described in (13).

TI_{i,t} = \frac{1}{1 + \exp[k_{15} - k_{16} (SE_{i,t} - SE_0)]}   (13)
The maximum demand D^{max}_{i,t} is then defined as the ratio between instant demand and the exploited opportunities TI_{i,t} (14). Figure 7a displays the exploitation of sectoral opportunities; Fig. 7b shows the development of maximum demand.

D^{max}_{i,t} = \frac{D_{i,t}}{TI_{i,t}}   (14)
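Equations (13) and (14) combine as follows; a sketch with illustrative parameter defaults:

```python
import math

def exploited_opportunities(se, se0=0.0, k15=5.0, k16=0.1):
    """TI (13): share of technological opportunities already exploited,
    a logistic function of search activities, bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(k15 - k16 * (se - se0)))

def max_demand(d_instant, ti):
    """Maximum demand (14): instant demand scaled up by the still
    unexploited opportunities (the smaller TI, the larger the potential market)."""
    return d_instant / ti
```

As search exhausts the opportunities (TI approaching 1), maximum demand converges toward instant demand and the adjustment gap closes.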
Fig. 7 (a) The exploitation of sectoral opportunities, (b) maximum demand
Income and Disposable Income

The final variable which appears in the demand equation (4) is the disposable income Dispo_{i,t}, which describes the share of income available for purchases in sector i. Disposable income plays a decisive role for the emergence of new industries, for which a certain amount of income available to buy the goods and services of the new sector is indispensable. Accordingly, (15) calculates the disposable income for sector i as the part of the macroeconomic income (displayed in Fig. 1c) which is not spent in the other sectors j (j ≠ i). Equation (16) describes the determination of the macroeconomic income Income_t:

Dispo_{i,t} = Income_t - \sum_{j=1,\, j \neq i}^{n} p_{j,t}\, D_{j,t}   (15)

Income_t = \sum_i p_{i,t}\, q_{i,t}   (16)
The disposable income (Fig. 8) follows the generally rising income trend in our standard scenario. With the emergence of a new industry the overall income rises, and the disposable income of the previous industry first benefits from this. Over time an increasing share of the income shifts to the new industry, thereby supporting the demand dynamics in this new industry.
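The budget identity (15) can be sketched directly; the list-based representation of sectors is our illustrative choice:

```python
def disposable_income(income, prices, demands, i):
    """Dispo (15): macroeconomic income minus spending in every sector j != i."""
    spent_elsewhere = sum(p * d for j, (p, d) in enumerate(zip(prices, demands)) if j != i)
    return income - spent_elsewhere
```

This makes the crowding-out mechanism explicit: as a new sector's turnover grows, every other sector's disposable income shrinks.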
Intensity of Competition

The first exit term we describe in detail is the intensity of competition IC_{i,t}. It is the combined result of intra-sector and inter-sector competition (see (17)). Intra-sector competition is very similar to that normally found in textbooks. Inter-sector competition occurs when different sectors produce products supplying comparable services.

Fig. 8 Disposable income

In the present form the effect of intra-sector competition is represented by N_{i,t}, the number of firms in sector i at time t, and the intensity of inter-sector competition is represented by N^{total}_t, the total number of firms in the whole economic system. The constant k_{17} is a measure of the competitiveness of the economic system, as determined, for example, by anti-monopoly laws etc. The ratio between inter- and intra-sector competition, RII, is governed by k_{18}: when k_{18} = 0 there is only intra-sector competition, and as k_{18} grows inter-sector competition gains in importance. This is an approximate form, since we can expect the intensity of competition to be affected also by the extent of product differentiation.

IC_{i,t} = k_{17}\left(N_{i,t-1} + k_{18}\, N^{total}_t\right)   (17)
The intensity of competition for the various sectors in our standard simulation is displayed in Fig. 2c. We see that due to the strong entry in the entrepreneurial period of an industry, the intensity of competition rises strongly, driven by the increasing number of firms in a sector. It slows down after a peak is reached and is shifted moderately higher when new sectors enter the scene, which indicates the impact of inter-industry competition.
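The reconstructed form of (17) can be sketched in code, with the caveat that the exact functional form is our reading of a badly garbled equation; only the qualitative behaviour (intra-sector term in N_{i,t-1}, inter-sector term switched on by k_{18}) is taken from the text:

```python
def intensity_of_competition(n_prev, n_total, k17=0.01, k18=0.0):
    """IC (17) as reconstructed: intra-sector pressure from the sector's own
    firm count plus inter-sector pressure from the economy-wide firm count,
    weighted by k18 (k18 = 0 leaves intra-sector competition only)."""
    return k17 * (n_prev + k18 * n_total)
```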
Failure, Mergers and Acquisitions

The mergers and acquisitions term MA_{i,t}, the second exit variable in (1), describes the failure and consolidation process during the shake-out period of an industry life cycle. In Fig. 2d we see that MA_{i,t} first increases strongly in young industries, indicating a high rate of failure in the entrepreneurial period. It slows down again with the emergence of a dominant design and the establishment of the industry. Formally, (18) describes MA_{i,t} as a function of the marginal costs MC_{i,t} and the search activities SE_{i,t} as well as instant demand D_{i,t}. Marginal
costs MC_{i,t} so far are simply a constant, although other relationships are easily conceivable.

MA_{i,t} = k_{19}\, N_{i,t-1}\, \frac{MC_{i,t}\, SE_{i,t}}{D_{i,t}}   (18)

MC_{i,t} = MC_0, \quad MC_0 = const = 1   (19)
Production

On the supply side the sectoral output Q_{i,t} is determined by the sectoral search activities SE_{i,t}, the physical capital stock CSphysical_{i,t} and the level of human capital HC_{i,t}, as described in (20):

Q_{i,t} = Q_0 + k_{20}\,(1 + ac_{i,t})\,[1 - \exp(-k_{21}\, SE_{i,t} - k_{23}\, CSphysical_{i,t} - k_{24}\, HC_{i,t})]   (20)

To reconcile the supply side with the demand side of our sectoral economic system, we calculate the excess demand EXD_{i,t} in each period (21) as the relative difference between demand and output. The excess demand displayed in Fig. 9b is then used to determine an adjustment variable a_{i,t}. This adjustment variable (Fig. 9c) is limited to an upper and a lower value (±0.5 in the standard scenario), which rules out extreme reactions. It is accumulated in (23) into the production adjustment variable ac_{i,t} used in the output equation (20).

EXD_{i,t} = \frac{D_{i,t} - Q_{i,t}}{D_{i,t}}   (21)

a_{i,t} = \frac{EXD_{i,t}}{2}   (22)

ac_{i,t} = \sum_t a_{i,t}   (23)
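Equations (20)-(22) can be sketched together; the constants are illustrative, and the halving in (22) is our reconstruction of a garbled expression (it preserves the sign of excess demand, which the negative values in Fig. 9c require):

```python
import math

def output(se, cs_physical, hc, ac, q0=0.0, k20=1.0, k21=0.01, k23=0.01, k24=0.01):
    """Output (20): saturating in search, physical and human capital,
    scaled by (1 + ac) to track demand."""
    return q0 + k20 * (1.0 + ac) * (1.0 - math.exp(-k21 * se - k23 * cs_physical - k24 * hc))

def excess_demand(d, q):
    """EXD (21): relative gap between demand and output."""
    return (d - q) / d

def adjustment(exd, bound=0.5):
    """a (22) as reconstructed: a damped reaction to excess demand,
    clipped to [-bound, +bound] to rule out extreme responses."""
    return max(-bound, min(bound, exd / 2.0))
```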
Investment and Basic Research

Besides the search activities SE_{i,t}, which are responsible for increasing production efficiency, production Q_{i,t} is also determined by the physical capital stock CSphysical_{i,t} and the level of human capital HC_{i,t}, which follow the investment decisions in our economic system. The overall periodical volume of investment Total_Investment_t (24) is a fraction (ms := marginal savings rate) of the macroeconomic income (Income_t) displayed in Fig. 1c. The investment is
Fig. 9 (a) Output, (b) excess demand, (c) adjustment parameter
distributed across the different sectors according to the relative size of the sectors, share_sector_{i,t} (25).

Total_Investment_t = ms \cdot Income_t   (24)

with ms := const = 0.25 (marginal savings rate).

share_sector_{i,t} = \frac{p_{i,t}\, Q_{i,t}}{Income_t}   (25)

The physical capital stock is then the accumulated and depreciated outcome of the sectoral investment, as described in (26) and displayed in Fig. 10.

CSphysical_{i,t} = (1 - r)\, CSphysical_{i,t-1} + share_physical \cdot share_sector_{i,t} \cdot Total_Investment_t   (26)
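The investment chain (24)-(26) in a minimal sketch; the depreciation rate r = 0.05 is an illustrative assumption, while ms = 0.25 is the value stated in the text:

```python
def total_investment(income, ms=0.25):
    """(24): overall investment as a fixed marginal-savings share of income."""
    return ms * income

def sector_share(p, q, income):
    """(25): a sector's weight in total investment, given by its turnover share."""
    return p * q / income

def physical_capital(cs_prev, share_physical, share_sec, investment, r=0.05):
    """(26): depreciated capital stock plus the sector's share of new investment
    (r is a hypothetical depreciation rate for this sketch)."""
    return (1.0 - r) * cs_prev + share_physical * share_sec * investment
```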
Besides the physical capital stock, human capital HC_{i,t} also matters for production. The level of human capital is determined by the investment in human capital, which builds up the education capital stock CSed_{i,t} in (27) and, together with the quantitative size of the labour force labour_{i,t}, determines the level of human capital HC_{i,t} in (28), displayed in Fig. 11.

h_{i,t} = k_{22}\, CSed_{i,t}   (27)
Fig. 10 Physical capital stocks

Fig. 11 Level of sectoral human capital
HC_{i,t} = labour_{i,t}\, h_{i,t}   (28)

labour_{i,t} = k_{25}\, N_{i,t-1}\, Q_{i,t-1}   (29)

employment_t = \sum_i labour_{i,t}   (30)
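Equations (27)-(29) sketched in Python; k_{22} and k_{25} are illustrative defaults, not the calibrated values:

```python
def sector_labour(n_prev, q_prev, k25=1.0):
    """labour (29): sectoral workforce from last period's number of firms and output."""
    return k25 * n_prev * q_prev

def human_capital(labour, cs_ed, k22=0.01):
    """h (27) and HC (28): per-worker human capital from the education capital
    stock, multiplied by the size of the workforce."""
    return labour * (k22 * cs_ed)
```

Summing sector_labour over all sectors gives the aggregate employment figure of (30).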
The sectoral labour force is an important figure for the observation of macroeconomic trends in our economic system. As we can aggregate the sectoral employment figures into a macroeconomic employment figure (see Fig. 1b), we can make propositions not only on the sectoral level but also on the aggregate level. The sectoral labour force labour_{i,t} is determined by the number of firms in the previous period N_{i,t-1} and the sectoral output of the previous period Q_{i,t-1} (29). The overall employment variable is calculated in (30). Besides sectoral research, fundamental research activities SEF_t in our economic system focus on the exploration of new technological opportunities which might lay the foundation for new industries. Investment in fundamental
research activities depends on the share of investment in fundamental research, shareRD, and the overall investment Total_Investment_t. Equation (31) describes the calculation of fundamental research activities:

SEF_t = shareRD \cdot Total_Investment_t
(31)
The Emergence of New Industries

One of the most distinctive features of our economic system is the endogenously variable number of sectors. The creation of a new sector is induced by the previous dynamics of the economic system. For a new industry i to emerge, a few conditions need to be fulfilled. First, sector i−1 has to become saturated; this is indicated by an adjustment gap of sector i−1 which starts to close (AG_{i-1,t-1} > AG_{i-1,t}). A closing adjustment gap indicates declining opportunities for growth, and profit-oriented entrepreneurs start searching for new opportunities. With a closing adjustment gap in industry i−1 we start a stepwise increasing counter variable counter_{i,t}, which simply counts the periods of declining opportunities in sector i−1. In each period an entry threshold is then calculated, which decreases with increasing accumulated fundamental research activities (32):

EntryThreshold_{i,t} = k_{26} - (k_{27}\, SEF_t)   (32)
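The entry mechanism can be sketched as follows; the sign in (32) is our reconstruction (chosen so the threshold falls with SEF, as the text states), and the constants are placeholders:

```python
def entry_threshold(sef, k26=30.0, k27=0.5):
    """(32) as reconstructed: the entry threshold falls as accumulated
    fundamental research SEF grows."""
    return k26 - k27 * sef

def new_sector_starts(counter, sef):
    """A new industry life cycle begins once the counter of periods with a
    closing adjustment gap exceeds the current entry threshold."""
    return counter > entry_threshold(sef)
```

More fundamental research thus shortens the waiting time until saturation of the old sector triggers a new one.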
Whenever the condition counter_{i,t} > EntryThreshold_{i,t} is fulfilled, the new industry i starts operating and a new industry life cycle is set up. In Fig. 12 the development of the sectoral entry threshold is displayed. In periods with increasing fundamental research activities the threshold decreases. Figure 12 also displays the counter variable, which starts to increase when the adjustment gap of a previous industry starts to decline. Figure 13 summarizes the complex interdependent structure of our model. It has to be mentioned that for this graphical representation of the model, circular connections have to be broken. In the simulation model this is done by a
Fig. 12 Entry threshold (bold) and counter variable
Fig. 13 Structure of the model
sequential structure for the various calculations; for example, the various investment decisions in one period, which are determined by the income of this period, start to exert their influence only one period later.
Simulation Experiments

In this section we give an overview of the various experiments we have performed so far. Before going into detail, a few remarks concerning the numerical experiments are necessary. Usually, for an experiment one or several parameters are varied and the results are compared with a standard scenario. Obviously, the model has to be tested for stability and robustness. This has been done in several papers (Saviotti and Pyka 2004b, 2008a), and new methodologies for robustness checks have also been developed. The model is supposed to produce qualitatively stable results for the experiments, which in some circumstances might nonetheless mean that the corridor of possible values of single figures can be broad. As we have not used any stochastic formulation in the model, we can dispense with Monte Carlo simulations. The following seven experiments were chosen to illustrate the broad applicability of our model to the analysis of a wide set of questions dealing with structural change, economic development and growth. The first three experiments, taken from Saviotti and Pyka (2004a), deal with very general aspects of the model (variation of technological opportunities, variation of learning rates and variation of production efficiencies) and allow us to gain a better understanding of the basic dynamics generated by our model. Experiment 4 (Saviotti and Pyka 2004c) then tackles
a macroeconomic question and analyzes the relationship between sectoral development and macroeconomic employment figures. Experiment 5 highlights the important relationship between variety and economic development, which allows us to confirm in Saviotti and Pyka (2004c) the decisive role of a heterogeneous industrial structure for growth and development, a role which is inaccessible to traditional models of economic growth. In a similar vein, experiment 6, performed in Saviotti and Pyka (2008a), sheds light on inter-sectoral coordination on the macroeconomic level, i.e. the critical question of the timing of the emergence of new sectors. The final experiment 7 (Saviotti and Pyka 2009) demonstrates one of the decisive advantages of our numerical model of economic development through the emergence of new industries, namely the investigation of co-evolutionary processes, which introduce a high degree of complexity. In particular, the relationship between the real sphere of technology development and the monetary sphere of financial availability is analyzed, and it is shown that, due to the co-evolutionary dynamics, there is scope for chaotic developments.
Experiment 1: Varying Technological Opportunity

The constant k_{13}, which appears in (11) for search activities SE_{i,t}, measures the extent of technological opportunity of the technology that created a new sector. In the standard scenario k_{13} had the same value for the different populations. In this experiment we varied the values of k_{13} for the different populations. We explored the situation in which the technological opportunity of each subsequent population is higher than that of the pre-existing one. Although there is no guarantee that such a condition applies systematically to all newly emerging sectors, it is not an implausible one in particular cases. In the past it was considered that agriculture had an intrinsically lower potential for productivity improvement than manufacturing. Electronics and information-based sectors seem to manifest a higher rate of productivity growth than historically displayed by mechanical sectors. With these considerations we do not want to prove that the pattern of increasing technological opportunity for subsequent populations of firms is of general applicability, but simply that there are a number of cases in which this could happen. We are thus exploring a scenario in which technological opportunity increases for each subsequent population. In the experiment we used two sets of values, corresponding to experiments 1a and 1b respectively (Table 1). These changes (Fig. 14) lead to an earlier start of the life cycle of populations 2 and 3, to a higher maximum number of firms at the peak of the life cycle, to an

Table 1 Values of the constant k_{13} used in experiments 1a and 1b, differing from those of the standard scenario
Experiment 1a: k_{13,1} = 40, k_{13,2} = 50, k_{13,3} = 60
Experiment 1b: k_{13,1} = 40, k_{13,2} = 60, k_{13,3} = 80
Fig. 14 Influence of technological opportunity on the number of firms
Fig. 15 Influence of technological opportunity on demand
increase in the steady-state level of demand of each population with respect to the pre-existing ones (Fig. 15), and to an increase of the maximum level of demand of each population with respect to the pre-existing ones (Fig. 16). These effects are qualitatively the same but larger in experiment 1b, where the increase in technological opportunity is greater.
Fig. 16 Influence of technological opportunity on the intensity of competition
Summarising, we could say that the effect of an increase in the technological opportunity of each new sector with respect to pre-existing ones is an acceleration of the process of economic development, determined by a faster emergence of new sectors, and an increase in the scope of economic development, as shown by the increase in the number of firms that each sector can support and by the higher levels of demand eventually achieved in each sector.
Experiment 2: Varying the Rate of Learning

In this experiment we varied the value of the constant k_{14} for the different populations with respect to the standard scenario. k_{14} takes a higher value for each subsequent population, that is, k_{14,1} < k_{14,2} < k_{14,3}. The values used are indicated in Table 2. Given the meaning of k_{14}, this experiment is equivalent to increasing firms' rate of learning. Three versions of the experiment are performed. The consequence of increasing firms' rate of learning is to accelerate the emergence of populations 2 and 3 with respect to the standard scenario. This has an equivalent effect on the number of firms and on demand. Entry into population 2 takes place earlier when the rate of learning for population 2 increases relative to that of population 1. The same type of change takes place for population 3, whose emergence is also accelerated by an increase of its rate of learning relative to that of population 2 (Fig. 17). Remembering that in our model a population corresponds to an industrial sector, we can see that a faster rate of learning can lead to an earlier emergence of a sector. Contrary to what happened in experiment 1, in this case only the time path of the number of firms seems to be affected, not its value.
Table 2 Values of k_{14} used in experiment 2, differing from those of the standard scenario

Experiment 2a: k_{14,1} = 0.005, k_{14,2} = 0.01, k_{14,3} = 0.015
Experiment 2b: k_{14,1} = 0.005, k_{14,2} = 0.015, k_{14,3} = 0.025
Experiment 2c: k_{14,1} = 0.005, k_{14,2} = 0.025, k_{14,3} = 0.05
Fig. 17 Influence of the rate of learning on the number of firms
The maximum number of firms in each of the three populations remains approximately equal to that of the standard scenario, but it begins to increase earlier for populations 2 and 3. The behaviour of sectoral demand is affected in a similar way (Fig. 18). The demand for each sector's output begins to rise more rapidly when we increase the rate of learning, up to the point where the demand for the output of sector 2 can overtake that of sector 1. Long-run demand for the output of a sector does not seem to be affected by a change in the learning rate. This leads us to expect that the aggregate number of firms and aggregate demand will grow faster the faster the rate of learning. However, in our model a faster rate of learning does not bring joy for everyone. While the system may display faster growth, the number of firms in population 1 begins to fall sooner, due to the inter-sector competition provided by the increased number of firms in populations 2 and 3. If the development of the system can be described by means of the life cycles of different sectors, these life cycles follow a time path that is determined both by the intrinsic features of each sector and by its interactions with other sectors in the economy. We can observe that in the most extreme case considered here (experiment 2c), when the rates of learning of populations 2 and 3 are the highest with respect to population 1, the number of firms and the sectoral demand of population 2 very soon overtake those of populations 1 and 3.
Fig. 18 Influence of the rate of learning on demand
If we compare the results of experiments 1 and 2, we can see that the general effect of an increase in technological opportunity (experiment 1) is both to accelerate the creation of new sectors and to increase their scope, that is, to increase the maximum number of firms in each sector, demand and maximum demand. On the other hand, an increase in the rate of learning of one sector relative to the others only influences the time path of the emergence of new sectors and not their size, as measured either by the number of firms or by demand. Thus technological opportunity can be seen as having a greater expansive effect on the development of the whole system than an increase in the rate of learning. However, an increase in the rate of learning can also lead to an overall faster growth of the whole system, but this growth is obtained more by greater efficiency than by an expansion of the markets of the various sectors. In both cases the general improvement of the growth potential of the economic system leads to a collective improvement, but some agents in the system may suffer. Thus the emergence of new sectors may mean faster growth, but it also leads to an earlier and faster emergence of competition for pre-existing (traditional) sectors. All the analysis so far has been based on individual, if interacting, populations of firms. The model can also help us to understand the consequences of sectoral dynamics for aggregate growth. The total number of firms and aggregate demand for the whole economy are represented in Fig. 19. Here we can see that aggregate demand is stimulated more by an increase in technological opportunity than by an increase in the rate of learning. The total number of firms grows more rapidly if the technological opportunity or the rate of learning of successive populations is increased with respect to the standard scenario. However, in this case we can see that if the economy were not to generate
Fig. 19 Aggregate curves with the results of experiments 1 and 2. (a) Shows the influence of technological opportunity and of the rate of learning on demand; (b) shows the influence of technological opportunity and of the rate of learning on the number of firms in each sector
any more new sectors after sector 3, the number of firms would converge irrespective of the technological opportunity or of the rate of learning. We have to bear in mind that the behaviour of the system after the "maturity" of sector 3 is artificial in the sense that we expect new sectors to emerge. The experiment is valuable nevertheless, because it tells us that if the emergence of new sectors were to slow down, for example due to the influence of particular economic policies, the overall number of firms in the economy would fall. Increasing intensity of competition, failures, mergers and acquisitions would tend to continuously reduce the number of surviving firms. The total number of firms can only increase, or at least remain constant, if new sectors are continuously created. Conversely, from these results we can expect that in the absence of new sectors the system will converge on a set of monopolies, their number being equal to that of the surviving sectors. The same cannot be said about aggregate demand. Here the higher growth path that is
established by an increase of either technological opportunity or the rate of learning persists beyond the maturity of sector 3. By combining these two results we can expect that the output per firm will continuously increase after the maturity of sector 3.
Experiment 3: The Efficiency-Variety Trade-off

According to (20), the average output produced by firms in a given sector i is determined by the level of search activities in the same sector and by a constant k_{20}, which can be considered a measure of efficiency in the same sector. The constant k_{20} can be considered both a measure of efficiency, since at equivalent SE it increases the average firm output in the sector, and a measure of non-search-based learning, that is, for example, of learning by doing. In this experiment we varied the value of k_{20} in order to explore the effect of firm efficiency on economic development. The results we obtained are represented in Fig. 20a, b, which show the effect of k_{20} on total demand and on the number of firms. The results of this experiment are as follows. Rate of creation of new sectors: The higher the value of k_{20}, and thus the higher the efficiency of a sector i, the faster the rate of creation of subsequent sectors. If we call the time at which each sector first appears t_{i,0}, then t_{i,0} is inversely proportional to k_{20}. Thus the effect of generally increasing firms' productive efficiency in all the sectors of the economic system would be to accelerate the "tempo" of economic development. This behaviour can be explained by means of the inducement that the saturation of a given sector provides for the creation of subsequent sectors. In turn, this inducement is determined by the increasing intensity of competition and by the saturation of demand. On the whole, a higher efficiency in sector i leads to a faster saturation of the same sector, thus triggering the creation of the next one (i + 1). Total demand: The total demand for each sector, and thus for the whole economic system, increases with increasing k_{20}. This is due to two reasons: first, since each sector emerges earlier, at any subsequent time the number of sectors in the economic system is greater for a greater value of k_{20}.
Second, the steady state level of demand achieved within each sector increases with k20. Thus total demand at any time t will be greater because there are more sectors in the economic system and because each sector has a higher steady state demand.

The number of firms in each sector: The maximum number of firms achieved in each sector at the peak of the life cycle falls with increasing k20, and the total number of firms in the economic system (Fig. 20a) increases more slowly when the average efficiency of each sector is higher. A faster rise in output, corresponding to a higher intensity of competition, leads to more exit and to more mergers and acquisitions. Hence the maximum number of firms reached in the life cycle of each sector falls with the increasing efficiency of the same sector. Thus the steady state rate of
Economic Growth Through the Emergence of New Sectors
81
Fig. 20 Effect of the productive efficiency of firms on the number of firms (b) and on demand (a). Efficiency increases in experiments 3 and 4 and falls in experiments 1 and 2. The effect on variety is represented by the number of sectors existing at a given time
growth in the total number of firms (Fig. 20a) falls with increasing efficiency.

In our calculations so far we varied the efficiency of all sectors by the same extent. Of course, this need not be the case in a real economic system. In fact, we expect efficiency and its rate of growth to vary differentially amongst sectors, thus leading to structural change. This possibility will be explored in subsequent experiments. The results described above provide a considerable, if not definitive, confirmation of the hypothesis about the complementarity between variety growth and efficiency growth (Saviotti 1996). A greater efficiency allows the more rapid creation of new sectors and leads to a greater net number of sectors in the economic system at a given time, that is, to a higher variety.

These results have some interesting implications for economic development. First, if we consider the general development of the world system without focusing on any particular country, we can see that the rate of growth of the economic system
A. Pyka and P.P. Saviotti
increases with the increasing efficiency of each sector. We have to bear in mind at this point that the calculations performed so far attribute the same value of k20 to all the sectors. Thus the development we have analysed is a proportional form of development, in which all sectors progress in the same way. Within this framework the saturation of any given existing sector accelerates the rate of creation of subsequent ones. This type of development and structural change takes place because new sectors are continuously (in the long run) added to the economic system. Thus the composition of the system changes, and this change in composition drives its rate of growth. A clear relationship exists in our artificial economic system between structural change, that is, changes in the composition of the system, on the one hand, and the rate of growth of the system on the other hand. However, as pointed out before, this type of structural change is not the only possible one. A different efficiency of each sector would add another component to the process of structural change by making some sectors with high values of k20 grow more rapidly than others. This second component of structural change has not yet been analysed in our experiments.
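The mechanism described above, in which the saturation of sector i triggers the creation of sector i + 1 and the birth times ti,0 are roughly inversely proportional to k20, can be illustrated with a toy simulation. The logistic demand path, the constants and the saturation threshold below are illustrative assumptions, not the model's actual equations:

```python
# Toy illustration: logistic demand whose growth rate scales with an
# efficiency constant (k20 in the text); saturation of sector i triggers
# the birth of sector i + 1. All numbers here are assumptions.

def creation_times(k20, n_sectors=4, d_max=100.0, threshold=0.95, dt=0.1):
    """Times t_i0 at which sectors 1..n_sectors first appear."""
    times = [0.0]          # sector 1 exists from the start
    t, d = 0.0, 1.0        # clock and demand of the newest sector
    while len(times) < n_sectors:
        d += k20 * d * (1.0 - d / d_max) * dt   # logistic demand growth
        t += dt
        if d >= threshold * d_max:              # saturation reached:
            times.append(round(t, 1))           # ...next sector is created
            d = 1.0                             # new sector starts small
    return times

low_eff = creation_times(k20=0.05)
high_eff = creation_times(k20=0.10)
# Doubling k20 roughly halves the spacing between sector births, i.e.
# t_i0 is approximately inversely proportional to k20.
```

Under these assumed dynamics, raising the efficiency constant accelerates the whole succession of sector births, which is the "tempo" effect discussed in the text.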
Experiment 4: Sectoral Employment and Productivity

Sectoral and total employment calculated for the standard scenario are shown in Fig. 21. The dynamics of sectoral employment follows closely that of firm creation, with the rate of employment growth being particularly high in the early phases of the life of a new sector and then declining gradually. In Fig. 21 an aggregate representation has been superimposed on the sectoral curves by adding up the contributions to employment of the different sectors at each time. This aggregate representation is particularly useful if we are more interested in the impact of variables such as productive efficiency, technological opportunity or the rate of learning
Fig. 21 Evolution of sectoral and total employment in the standard scenario
Fig. 22 Evolution of the sectoral output shares with the emergence of new sectors
etc. on the aggregate properties of the system than in the internal mechanisms of each sector. What is immediately clear is that employment creation within each sector tends to decline and that overall employment can keep growing only due to the emergence of new sectors. Thus, the process of qualitative and structural change is a determinant of employment growth. This conclusion is reinforced by Fig. 22, which displays the output shares of different sectors. As the shares of old sectors decline, those of emergent ones first rise and then start declining as the once new sectors move towards maturity.

An interesting implication following from Fig. 21 is that a relatively stable macroeconomic growth pattern is produced by a much more turbulent microeconomic evolution of individual sectors. To the extent that the patterns of sectoral evolution described here are common, a stable macroeconomic growth pattern can only be obtained by creating new sectors, that is, by changing the composition of the economic system. In this sense the flexibility required of the economic system is the ability to create new sectors, or its creativity. We can also notice (Fig. 23) that productive efficiency and employment change in opposite directions during the development of each sector: productivity rises as employment falls.
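The aggregation underlying Fig. 21 can be sketched with purely illustrative sectoral curves (the bell shape, peak and width below are assumptions, not the model's output): summing staggered sectoral life cycles yields a far smoother aggregate than any single, turbulent sector.

```python
# Illustrative aggregation: each sector's employment is modelled as a
# bell-shaped pulse (an assumed shape, not the model's equations) starting
# at a staggered birth time; the aggregate is the sum over sectors.
import math

def sector_employment(t, birth, peak=10.0, width=80.0):
    """Bell-shaped sectoral employment path after the sector's birth."""
    if t < birth:
        return 0.0
    x = (t - birth - width) / width
    return peak * math.exp(-x * x)

births = [0, 60, 120, 180, 240]   # staggered sector creation times
total = [sum(sector_employment(t, b) for b in births) for t in range(400)]
# Each sectoral pulse rises and decays, but the staggered sum keeps
# aggregate employment far smoother than any single sector (cf. Fig. 21).
```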
Experiment 5: Variety

Going back to our hypotheses about variety and efficiency, we find a further confirmation of our hypothesis on the complementarity of increasing efficiency and increasing variety for economic development. Not only can an adequate succession of new sectors compensate for the growing inability of older ones to provide employment, but faster entry of new sectors leads to a higher rate of growth of variety and to a higher rate of growth of employment. Starting from the sectoral shares displayed in Fig. 22, we calculated the varieties of each sector and the aggregate variety of the economic system by means of the
Fig. 23 Productivity and employment development in a sector
Fig. 24 Change in aggregate variety during the development of the economic system. A higher rate of learning than in the standard scenario has been used to display a greater number of sectors
informational entropy function. As we can see in Fig. 24, the variety of the economic system generally increases during economic development as a consequence of the creation of new sectors. However, there are short periods during which variety remains approximately constant or falls. These periods correspond to the conjunction of the decline of mature sectors and the growth of emerging ones. As was previously pointed out, the birth of a new sector is triggered by the saturation of a previous one. Such a condition, amounting to an almost perfect intertemporal coordination, is not necessarily present in real economic systems. It is possible for a new sector to emerge either before or after the complete saturation of a pre-existing one. In the former case we expect both employment and variety to
Fig. 25 The relationship between employment and variety (k5 = 0.025)
grow at a faster pace than in our results; in the latter we expect employment and variety growth to slow down. The latter case would be an example of poor intertemporal coordination, in which the new sectors are not "ready" when required. In order to display a greater number of sectors in the calculations performed to obtain Fig. 24, we accelerated the process of development with respect to the standard scenario by increasing the rate of learning k14 (see experiment 2).

Figure 25 shows the relationship between employment and variety. There is a general trend towards increasing employment as variety grows, but employment may fall during short periods, presumably when the rate of variety growth is lower. In fact, the periods of negative growth of employment in Fig. 25 correspond to the periods when variety is either growing very slowly or falling in Fig. 24. Thus, this figure seems to confirm the generally positive relationship between the variety of the economic system and the level of employment it can sustain.

In order to further test the relationship between variety and employment creation we calculated dL/dt as a function of variety. Different values of variety were obtained by varying technological opportunity, the rate of learning and productive efficiency. The rate of creation of new sectors, and thus the rate of variety growth, is accelerated by increasing each of these variables, but by different mechanisms. Increasing the rate of learning accelerates the emergence of new sectors but leaves almost unchanged the maximum demand of each sector. Increasing technological opportunity accelerates the rate of creation of new sectors and the maximum demand in each sector. Increasing productive efficiency accelerates the rate of creation of new sectors but reduces the number of firms that can supply even an increasing demand. We can expect variety growth obtained by these different mechanisms to have different effects on employment creation.
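The aggregate variety referred to above is computed with the informational entropy function over the sectoral output shares. Assuming the usual Shannon form, H = -sum_i s_i ln(s_i) (the chapter cites the function without restating it), a minimal sketch is:

```python
# Shannon-form informational entropy over sectoral output shares
# (assumed form; the chapter cites the function without restating it).
import math

def informational_entropy(shares):
    """H = -sum_i s_i * ln(s_i) over output shares summing to one."""
    return -sum(s * math.log(s) for s in shares if s > 0)

one_sector = informational_entropy([1.0])          # no variety
four_even = informational_entropy([0.25] * 4)      # maximal for 4 sectors
four_skewed = informational_entropy([0.7, 0.1, 0.1, 0.1])
# Variety rises both when new sectors appear and when output is spread
# more evenly across existing sectors.
```

On this measure, variety grows with the number of distinguishable sectors and with the evenness of their shares, which is exactly the behaviour displayed in Fig. 24.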
If we remember that in this model variety depends on the number of distinguishable sectors in the economic system, we can understand that a higher level of demand or a lower number of firms can lead to different employment levels at equivalent variety. Figures 26 and 27 show the effect of variety on the rate of employment
Fig. 26 Effect of variety on employment creation. The changes in variety are here obtained by changing the rate of learning
Fig. 27 Effect of variety on the rate of employment creation. The changes in variety are here obtained by changing k20, the rate constant for learning by doing
creation, and Figs. 28 and 29 the effect of variety growth on the rate of employment creation. Both higher levels of variety and higher rates of variety growth have a generally positive effect on the rate of employment creation, except for the case in which variety is increased by raising productive efficiency. In this case the positive effect of variety, due to the increasing number of economic activities corresponding to the sectors, is more than compensated by the rise in productive efficiency. As a result of these experiments, our hypothesis no. 1 may need to be slightly modified. Variety growth is likely to be a necessary requirement for the continuation of long term economic development, and in most of the situations we
Fig. 28 The effect of variety growth on employment creation. Variety is changed by changing k14, the rate of learning (k14 increases from 0.005 to 0.04 in steps of 0.005)
Fig. 29 The effect of variety growth on employment creation. Variety is changed by changing k13, technological opportunity (k13 increases from 25 to 250 in steps of 25)
explored, it contributes positively to employment creation, but it is not a sufficient condition under all circumstances. It is still possible for productive efficiency to increase fast enough to compensate for the positive effect of variety growth.
Experiment 6: Inter-sector Coordination and Macro-economic Trends

The basic features of the economic system simulated by this model seem to indicate the existence of an industry life cycle (ILC). This life cycle is essentially driven by competition. As a new innovation creates an adjustment gap, thus defining the scope of the sector, early entrants find themselves in a situation of temporary monopoly. As imitation induces entry, the intensity of competition increases, thus reducing further entry and eventually stimulating exit. The intensity of competition affects not only the dynamics of one population, but also that of the subsequent ones. The attainment of very intense competition in a population induces the creation of new niches, where the early entrepreneurs will again enjoy a temporary monopoly. The process of economic development simulated in this model thus involves a set of overlapping and interacting populations/sectors.

Let us also observe that the life cycle predicted by the model is the result of the balance of entry and exit, as determined by the adjustment gap, by the intensity of competition and by mergers and acquisitions, without any reference either to dominant designs (Utterback and Suarez 1993) or to the balance between product and process innovations. Thus, although industry life cycles can be created by different factors, including dominant designs and increasing returns to R&D, an ILC can also be created by the joint dynamics of competition and demand. This is a genuine prediction, since our hypotheses on competition and demand cannot be expected ex ante to produce a cyclical behaviour under all possible circumstances.

In this experiment we investigate the effect of inter-sector coordination on macroeconomic growth paths. Above we have seen that the time of emergence of a new sector can be affected by the parameters of search activities.
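A minimal sketch of such a competition-driven life cycle can be written with assumed functional forms: entry proportional to the remaining adjustment gap, and exit proportional to a competition term that grows with the number of firms. All the constants and both equations below are illustrative assumptions, not the chapter's model:

```python
# Toy competition-driven industry life cycle (assumed functional forms and
# constants): entry is proportional to the remaining adjustment gap, exit
# to an intensity-of-competition term that grows with firm numbers.

def industry_life_cycle(steps=400, k_entry=0.08, k_exit=0.004,
                        k_serve=0.05, d_max=100.0):
    """Return the path of the number of firms over time."""
    n, served = 1.0, 0.0
    path = []
    for _ in range(steps):
        gap = d_max - served                       # unexploited market
        n += k_entry * n * gap / d_max - k_exit * n * n
        n = max(n, 0.0)
        served = min(d_max, served + k_serve * n * gap / d_max)
        path.append(n)
    return path

path = industry_life_cycle()
peak = max(path)
# Temporary monopoly and imitation drive entry; as the gap closes and
# competition intensifies, exit dominates and firm numbers decline.
```

Even in this crude form, the joint dynamics of demand saturation and competition alone generate the rise-peak-fall pattern of firm numbers, with no appeal to dominant designs.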
Inter-sector coordination can be measured by the delay between the creation of sector n and that of sector (n + 1). With adequate inter-sector coordination, the growing employment-creating capacity of sector (n + 1) can be expected to compensate for the falling capacity of sector n. Thus, not only do new sectors need to be created, but they need to be created at the right times. Furthermore, we can expect a long delay between sectors n and (n + 1) to hinder the process of compensation, with employment falling in sector n before it recovers due to the influence of sector (n + 1). This process will interact with the duration of the ILC, since when there is a "long" ILC in sector n, macro-economic stability will be more tolerant of a delay in the creation of sector (n + 1). In principle, the factors that influence the shape and duration of the ILC can be expected to interact with the inter-sector delay. Studying the joint influence of the factors affecting the ILC and of inter-sector delays should enable us to derive the most favourable conditions for the emergence of a sustainable macro-economic growth path. We can expect that the shorter the ILCs, the faster the economic system will have to create new sectors in order to provide a balanced growth path, which means one with a high trend rate of growth and with limited fluctuations. Figure 30 shows that the rate of growth of aggregate employment is considerably affected by inter-sector delays.
Fig. 30 Effect of inter-sector coordination on the aggregate employment growth path (scenarios: slow entry, semi slow, standard, semi fast and fast entry)
Fig. 31 Influence of the rate of entry of new sectors on the rate of growth and on the fluctuations of employment
Figure 30 shows that the rate of growth of employment rises systematically as the rate of entry of new sectors into the economy increases. Furthermore, in addition to affecting the slope of the employment growth path, inter-sector coordination also affects the fluctuations in employment arising from the patterns of growth and decay of different sectors. Figure 31 shows the combined effects of inter-sector coordination
on the slope and on the fluctuations of the employment growth path. As we can see from Fig. 31, the rate of employment growth rises with a growing rate of entry of new sectors, but the fluctuations behave in a more complex way, increasing first, then falling and subsequently increasing again. Considering that sustainable economic development is likely to require a high rate of employment growth but low employment fluctuations, we are tempted to say that the most sustainable economic development paths are likely to be found at the centre of Fig. 31.

In summary, these experiments, although still limited, show that inter-sector coordination can exert an important influence on the rate of growth and on the fluctuations of employment, and therefore on the sustainability of economic development. More precisely, a growing rate of entry of new sectors into the economy raises the rate of employment growth but has a more complex effect on the fluctuations of employment. Exploring the conditions required to give rise to the most sustainable economic development paths requires further work.
Experiment 7: The Co-evolution of Technologies and Financial Institutions

In the experiments described here, we assess the impact of changes in the values of the constants k3 and k2 on the creation of firms in each sector and on the rate of creation of employment. Figures 32 and 33 show the effect of varying k3 from 0.1 to 0.3 for k2 = 1. It can be clearly seen that, for values of k3 above 0.1, the economic system's ability to create new firms collapses. Given that economic development cannot be expected to proceed without the creation of firms, this result leads us to expect that the overall process of economic development will stop for k3 above a given threshold value. However, the threshold value of k3 rises with growing values of k2. In Figs. 34 and 35, we show the results of similar experiments for a higher value of k2 (k2 = 2), which means a higher availability of financial resources for all sectors. For this value of k2, strange fluctuations emerge for k3 values around 0.5. The meaning of this result is clear: the richer an economic system is in financial resources, the
Fig. 32 Number of firms (k2 = 1; k3 = 0.1)
Fig. 33 Number of firms (k2 = 1; k3 = 0.3)
Fig. 34 Number of firms for k3 = 0.2 and k2 = 2

Fig. 35 Number of firms for k3 = 0.5 and k2 = 2
greater its ability to absorb the effects of a bubble due to excessive expectations of returns by investors in emerging sectors. Likewise, the richer an economic system is in financial resources, the easier it will be to find financial resources to invest in emerging sectors. Higher general financial availability (k2) accordingly allows for
faster economic development, as single sectors can obtain larger portions (k3) of the available financial resources without detrimental effects on other industries. This result could be interpreted as implying the existence of increasing returns: the rich get richer. However, if there are increasing returns, they are increasing returns with caution. If the rich get richer, they are also most likely to create bubbles which will slow down their process of economic development.

The emergence of strange behavior is also confirmed for employment creation. In this case as well, and for the same values of k3 and k2, the behavior of the system becomes strange as we raise k3 (Figs. 36 and 37). These results show that the behavior of the system changes markedly and becomes strange for values of k3 above a threshold value. This led us to suspect that the strange behavior could in fact be chaotic, and we decided to carry out tests for the presence of chaotic behavior. It has to be pointed out from the beginning that these tests were designed to detect deterministic chaos generated by one equation. In contrast, the behavior of our
Fig. 36 Employment creation for k3 = 0.1 and k2 = 1

Fig. 37 Employment creation for k3 = 0.35 and k2 = 1
Fig. 38 Phase portrait for employment creation (k3 = 0.1, left, and k3 = 0.2, right)
economic system is the result of complex interactions in a system of equations. Thus, in our case, the results of these tests cannot be fully conclusive, but they are in line with procedures suggested in non-linear econometrics (e.g. Dechert and Gencay 1992). To start with, we construct phase portraits, in which the behavior of the system at a given time is represented as a function of its behavior at a previous time. For non-chaotic behavior the phase portraits are smooth curves (Fig. 38, left), while in chaotic regions they become strange and apparently inexplicable (Fig. 38, right). Since these phase portraits resemble those of time series characterized by deterministic chaos (e.g. Kantz and Schreiber 1997), we decided to perform some econometric tests estimating the Lyapunov exponents, which describe the sensitive dependence on initial conditions. Lyapunov exponents below zero characterize systems with stable fixed points, whereas exponents larger than zero indicate chaos, in the sense of the unpredictability of the future despite a deterministic development (see Kantz and Schreiber 1997, chapter 5). The econometric tests are performed with the chaos package in R. For this purpose, we first have to simulate long time series (1,500 periods). For our standard scenario, k2 = 1, k3 = 0.1, we get a maximum Lyapunov exponent R = -1.558, which excludes the possibility of chaotic behavior in this case. Figure 39 shows that the Lyapunov exponent remains negative for a large number of iterations which follow the neighbours of each point under consideration, i.e. the system remains within its trajectory. When we increase k3 slightly (k2 = 1, k3 = 0.2), the estimated maximum Lyapunov exponent becomes positive (R = 2.62), indicating that our system now shows a behavior which resembles deterministic chaos (Fig. 40).
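For intuition, the sign convention for Lyapunov exponents can be illustrated on a one-dimensional map (this is a textbook construction, not the chapter's R-based estimation on simulated series): the largest exponent of a 1-D map is the trajectory average of ln|f'(x)|, negative at a stable fixed point and positive in the chaotic regime.

```python
# Sign-of-exponent illustration on the logistic map x -> r x (1 - x)
# (not the chapter's estimation procedure): the largest Lyapunov exponent
# of a 1-D map is the trajectory average of ln|f'(x)|, with
# f'(x) = r (1 - 2x) for the logistic map.
import math

def lyapunov_logistic(r, x0=0.4, n=10000, burn=1000):
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

stable = lyapunov_logistic(2.9)    # negative: stable fixed point
chaotic = lyapunov_logistic(4.0)   # positive (about ln 2): chaos
```

The same sign criterion underlies the estimates reported for the model: negative exponents (as for k3 = 0.1) indicate trajectories that stay together, positive ones (as for k3 = 0.2) indicate sensitive dependence on initial conditions.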
We can gain a better understanding of the dynamics of the interaction between financial institutions and technologies by plotting the number of firms and financial availability as functions of time (Fig. 41) during the period in which the transition between the "normal" and the "chaotic" regimes occurs. For the fourth industrial sector this happens between t = 308 and t = 318. In Fig. 41 the bold curve represents the fourth sector, while the normal curves represent the first, second and third sectors respectively. We can immediately see that (1) the number of firms in all four sectors represented starts falling earlier and shows a more pronounced decline in the chaotic than in the "normal" regime (Fig. 41a, c); (2) financial
Fig. 39 Lyapunov exponents (s = 200) for k2 = 1, k3 = 0.1

Fig. 40 Lyapunov exponents (s = 200) for k2 = 1, k3 = 0.2
availability shows a much greater volatility and a much more drastic fall in the chaotic than in the "normal" regime (Fig. 41b, d). In fact, Fig. 41 shows that the immediate impact of the onset of chaos on financial availability is much faster and more pronounced than on the number of firms.

At this point, to get a better idea of the changes in behavior we could expect as we varied k3 and k2, we explored the parameter space of these two constants more systematically. The results are shown in Fig. 42. There it can be seen that, for low values of k2, system behavior becomes "chaotic" for very low values of k3, and that values of the Lyapunov exponents higher than 1 are only attained for k2 smaller than 2.5. In Fig. 42 we identify regions of parameter space where economic development can easily occur (negative values of Lyapunov exponents) and regions where it cannot (positive values of Lyapunov exponents). These regions can be
Fig. 41 Comparison of Ni (number of firms in sector i, a and b) and of FAi (financial availability in sector i, c and d) in the transition periods between the pre-chaotic (stable) and the chaotic regimes
Fig. 42 Emergence of "chaotic" behavior as a result of the joint variation of k3 and k2. White regions correspond to normal behavior, while grey regions of increasing darkness correspond to "chaotic" behavior
represented as valleys and peaks, the former providing conditions conducive to economic development and the latter hindering it. Figure 42 implies that there is no unique development path which all economic systems will have to follow in order to develop but that there are ranges of conditions (corridors) within which
Fig. 43 Variation of Lyapunov exponents as a function of k3 for different values of k2
economic development can occur and other ranges where it is very difficult or impossible. This finding is compatible with the observation that there are persistent asymmetries amongst countries concerning their output structures and the institutional configurations required to produce them, asymmetries which underlie the concept of the National Innovation System (Lundvall 1992; Nelson 1992).

Figure 43 displays Lyapunov exponents from experiments varying k3 for different values of k2. The general result of these experiments confirms our expectations from observing the development of sector evolution as well as of employment evolution, and shows that the region where the behavior of our economic system is pre-chaotic when we vary k3 becomes wider for higher values of k2, although the effect of k2 is not linear. So far we have shown that the behavior of our economic system can become "chaotic" as a result of the co-evolution of technologies and financial institutions. An increasing intensity of feedback between technologies and financial institutions can accelerate the process of economic development, but beyond a given intensity the system moves to a "chaotic" regime and loses its ability to develop further. A higher intensity of feedback occurs when, for a given differential rate of growth of the number of firms in sector i with respect to the average sector in the economy, financial institutions allocate a greater amount of resources to sector i. When this amount of resources, which is subject to high uncertainty, is too low, the system will not achieve its full potential. If the amount of feedback is too high, financial institutions allocate an excessive amount of resources to sector i, which at the same time means that financial resources are withdrawn faster from the previous sectors i-1.
The amount is excessive if it cannot achieve an adequate rate of return, thus leading to the collapse of financial institutions and, in turn, of the whole system. In other words, the onset of chaotic behavior coincides with a bubble-like behavior in which an overestimate of the development potential of particular technologies and
Fig. 44 Trend of employment growth as a function of k3 for different values of k2
markets by financial institutions leads the whole economic system to an at least temporary collapse. "Chaotic" behavior is not good for economic development. What we still do not know at this point is how the behavior of our system changes in the pre-chaotic region. The results of further experiments carried out to explore this point are shown in Fig. 44. In Fig. 43, we can see that the range of values of k3 for which non-chaotic behavior occurs becomes wider when k2 rises from 1 to 2 and to 3. We can also see in Fig. 44 that, in the pre-chaotic range, employment growth reaches its maximum value just before the onset of the chaotic regime and the consequent collapse of economic development. In other words, these results seem to indicate that the best conditions for economic evolution occur at the edge of chaos.

This result can be interpreted by combining the uncertainty which affects the behaviour of entrepreneurs, uncertainty which they themselves increase, with the observation that economic development creates growing diversity but not growing disorder. Our observations on long term economic development are better explained by the emergence of more complex but ordered structures than by growing disorder. The very concept of structure implies the existence of constraints amongst the components of the system. Hayek (1978) maintains that the creation of "order", and not of equilibrium, is one of the main features of economic development. We can then expect the creation of new economic entities to contribute to economic development and their emergence to pass through different phases. In the very early, entrepreneurial phases we can expect higher than average growth rates and a greater propensity to fluctuations. As the sectors based on the new economic entities move from emergence to maturity, we can expect growth rates to converge to the average for the economic system and volatility to fall.
The presence and dynamics of co-evolution are quite coherent with a Schumpeterian interpretative framework.
Concluding Remarks

In this chapter we have given a detailed introduction to our model of economic development driven by the creation of new industrial sectors. Each sector is identified with the population of firms producing a differentiated product. The dynamics of each sector is determined by the dynamics of the overlapping populations of firms corresponding to different sectors. The model dynamics is based on entry, as determined by the adjustment gap created by the innovation defining the sector, and on exit, as determined by the increasing intensity of competition and by mergers and acquisitions. The adjustment gap represents the size of the population of potential adopters of a given product who have not yet adopted it; alternatively, it can be considered as the part of a given market that is still empty or unexploited. In turn, the adjustment gap, and the rate at which it is closed by the increasing production capabilities of firms, are influenced by the technological opportunities of different sectors and by their rates of learning.

The dynamics of the emergence of new sectors depends not only on the creation of new knowledge and innovations, but also on the inducements for the generation of new niches emerging within sectors that were once new and innovating but that, due to an increased intensity of competition, have become "saturated" and thus devoid of new opportunities for growth. An increased intensity of competition within one sector, as determined by the entry of imitators, constitutes the inducement to create a niche that could subsequently become a new sector. In this model competition for the firms in a sector does not come only from within the sector (intra-population or intra-sector competition) but also from other sectors (inter-population or inter-sector competition). The model leads to the emergence of a life cycle for each population/sector.
The cycle in this case can be considered a competition life cycle (Saviotti 1998) since it is started by the temporary monopoly existing in the early stages and ended by the by the increasing intensity of competition in the maturity phase. The model thus has a very strong Schumpeterian flavour. This model can be considered a very simplified and stylised representation of how economic development is created by qualitative change, leading to a changing composition of the system. Given its simplicity, it already provides some very interesting analysis of the effect of changing composition on economic development. A very large number of experiments can be performed on the model to vary the relative values of the constants contained in it. We introduced to a small subset of experiments performed so far. Of course, these experiments do not exhaust the scope for exploration of the model and there are other variables whose influence ought to be analysed. We propose to do this in the near future. The stylised results of the basic scenario show a number of firms that first increases and then falls, a maximum sectoral demand that increases up to a constant value, a sectoral demand that increases up to a maximum, falls and then follows a gradually increasing path in the long term, a rate of mergers and acquisitions that first rises and then falls. On the whole the behaviour can be described as a life cycle driven by competition. Entry is essentially determined by an innovation defining the
sector, while exit is due to the increasing intensity of competition and to mergers and acquisitions. An aggregate view of this artificial economic system allows for computing total income and aggregate employment, and gives important insights into the macroeconomic features of sectoral development. In summary, the model we have developed is a dynamic model of growth involving qualitative change. Furthermore, it is a model of growth in which the aggregate output of the economy can be calculated by means of the outputs of individual units (firms or sectors). Of course, the model in its present form is still highly stylised, and a number of modifications are required to make it more realistic. One strategy for future research is to improve the calibration side and bring the model closer to empirical data. The second strategy we have in mind is to leave the population level and transfer the model into an agent-based model (Pyka and Fagiolo 2007), which will allow us to investigate further the influence of the heterogeneity of the actors' knowledge and of their decision mechanisms.
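The life-cycle mechanics summarised above (entry driven by the adjustment gap, exit driven by the rising intensity of competition and by mergers and acquisitions) can be illustrated with a deliberately minimal Python sketch. All constants and function names here are illustrative placeholders of ours, not the calibrated constants k1 to k27 of the published model:

```python
def simulate_sector(steps=200, d_max=1000.0, k_learn=0.08,
                    k_entry=0.01, k_exit=0.00005):
    """Stylised sector life cycle: firms enter in response to the
    adjustment gap and exit as the intensity of competition rises."""
    firms = 1.0            # the innovator opens the sector
    demand_served = 1.0    # part of the market already exploited
    history = []
    for t in range(steps):
        adjustment_gap = d_max - demand_served           # unexploited market
        # learning closes the gap (logistic growth of served demand)
        demand_served += k_learn * demand_served * adjustment_gap / d_max
        entry = k_entry * adjustment_gap                 # imitators attracted by the gap
        competition = firms * demand_served              # crowding proxy
        exit_ = k_exit * competition                     # shake-out via competition and M&A
        firms = max(firms + entry - exit_, 0.0)
        history.append((t, firms, adjustment_gap))
    return history
```

Under these placeholder constants the number of firms first rises while the adjustment gap is large and then falls as the market saturates, reproducing the rise-and-fall competition life cycle described in the text.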
Appendix 1: List of variables

Nit: Number of firms in industry i at time t
FAit: Financial availability in industry i at time t
AGit: Adjustment gap of industry i at time t
ICit: Intensity of competition in industry i at time t
MAit: Mergers, acquisitions and failures in industry i at time t
Dmaxit: Maximum demand in industry i at time t
Dit: Instant demand in industry i at time t
Daccit: Accumulated demand in industry i at time t
Dispoit: Disposable income in industry i at time t
Yit: Product characteristics in industry i at time t
DYit: Product differentiation in industry i at time t
pit: Product price in industry i at time t
Incomet: Macroeconomic income at time t
SEit: Search activities in industry i at time t
Qit: Output in industry i at time t
ucit: Unit costs in industry i at time t
Labourit: Employment in industry i at time t
Investmentit: Investment in industry i at time t
CSphysicalit: Physical capital stock in industry i at time t
CSedit: Accumulated investment in education in industry i at time t
hit: Quality of human capital in industry i at time t
HCit: Human capital in industry i at time t
Wagesit: Wages in industry i at time t
HCit: Human capital in industry i at time t
MCit: Marginal costs in industry i at time t
TIit: Exploited technological opportunities in industry i at time t
(continued)
Table (continued)

EXDit: Excess demand in industry i at time t
acit: Production adjustment in industry i at time t
Total_Investmentt: Macroeconomic investment figure at time t
SEFt: Fundamental research activities
Appendix 2: List of constants

Constant  Meaning                                              Value in standard scenario
k1        Weight for the entry terms                           10
k2        Overall financial availability                       1
k3        Speed of adjustment of financial availability        0.05
k4        Weight of demand                                     10
k5        Scope for the development of new services            1
k6        Speed of the development of new services             0.25
k7        Scope for product differentiation                    1
k8        Speed of product differentiation                     0.25
k9        Weight for unit costs                                1/500
k10       Wage adjustment                                      0.01
k11       Value of product services                            0.01
k12       Value of product differentiation                     0.01
k13       Overall technological opportunities                  10
k14       Learning rate                                        0.025
k15       Degree of technological opportunities                2
k16       Speed of exploitation of opportunities               0.125
k17       Weight of competition term                           0.1
k18       Ratio between inter- and intra-industry competition  1
k19       Weight for mergers and acquisitions                  0.01
k20       Production efficiency                                3
k21       Weight for search activities                         0.000015
k22       Weight of human capital accumulation                 1
k23       Weight for physical capital stock                    0.000005
k24       Weight for human capital stock                       0.0000005
k25       Weight determining labour creation                   1
k26       Constant determining potential sectoral entry        50
k27       Weight for fundamental research activities           100
References

Aghion P, Howitt P (1998) Endogenous growth theory. MIT Press, Cambridge, MA
Baumol WJ, Panzar JC, Willig RD (1982) Contestable markets and the theory of industry structure. Harcourt Brace Jovanovich, San Diego
Cornwall J (1977) Modern capitalism: its growth and transformation. Martin Robertson, London
Dechert D, Gencay R (1992) Lyapunov exponents as a nonparametric diagnostic for stability analysis. J Appl Econom 7:41–60
Fagerberg J (2000) Technological progress, structural change and productivity growth: a comparative study. Working Paper No. 5/2000, Centre for Technology, Innovation and Culture, University of Oslo
Fagerberg J, Verspagen B (1999) Productivity, R&D spillovers and trade. Working Paper No. 3/1999, Centre for Technology, Innovation and Culture, University of Oslo
Hayek FA (1978) Competition as a discovery process. In: Hayek FA (ed) New studies in philosophy, politics, economics and the history of ideas. Routledge, London, pp 179–190
Jovanovic B, MacDonald GM (1994) The life cycle of a competitive industry. J Polit Econ 102:322–347
Kantz H, Schreiber T (1997) Nonlinear time series analysis. Cambridge University Press, Cambridge
Klepper S (1996) Entry, exit, growth and innovation over the product life cycle. Am Econ Rev 86:562–583
Lundvall BA (ed) (1992) National systems of innovation. Pinter, London
Nelson RR (ed) (1992) National innovation systems. Oxford University Press, Oxford
Pasinetti LL (1981) Structural change and economic growth. Cambridge University Press, Cambridge
Pasinetti LL (1993) Structural economic dynamics. Cambridge University Press, Cambridge
Pyka A, Fagiolo G (2007) Agent-based modelling: a methodology for neo-Schumpeterian modelling. In: Hanusch H, Pyka A (eds) The Elgar companion on neo-Schumpeterian economics. Edward Elgar, Cheltenham, pp 467–492
Romer P (1987) Growth based on increasing returns due to specialization. Am Econ Rev 77:55–62
Romer P (1990) Endogenous technical progress. J Polit Econ 98:71–102
Salter WEG (1960) Productivity and technical change. Cambridge University Press, Cambridge
Saviotti PP (1996) Technological evolution, variety and the economy. Edward Elgar, Aldershot
Saviotti PP (1998) On the dynamics of appropriability, of tacit and codified knowledge. Res Policy 26:843–856
Saviotti PP, Pyka A (2004a) Economic development by the creation of new sectors. J Evol Econ 14:1–35
Saviotti PP, Pyka A (2004b) Economic development, variety and employment. Revue Economique 55(6):1023–1049
Saviotti PP, Pyka A (2004c) Economic development, qualitative change and employment creation. Struct Change Econ Dynam 15:265–287
Saviotti PP, Pyka A (2008a) Product variety, competition and economic growth. J Evol Econ 18:167–182
Saviotti PP, Pyka A (2008b) Micro and macro dynamics: industry life cycles, inter-sector coordination and aggregate growth. J Evol Econ 18:323–348
Saviotti PP, Pyka A (2009) The co-evolution of technologies and financial institutions. In: Pyka A, Cantner U, Greiner A, Kuhn T (eds) Recent advances in neo-Schumpeterian economics: essays in honour of Horst Hanusch. Edward Elgar, Cheltenham, pp 81–100
Saviotti PP, Pyka A (2011) Generalized barriers to entry and economic development. J Evol Econ 21:29–52
Saviotti PP, Pyka A, Krafft J (2007) On the determinants and dynamics of industry life cycles. Presented at the EAEPE conference, Porto, 2007
Utterback JM, Suarez FF (1993) Innovation, competition and industry structure. Res Policy 22:1–21
Mesoeconomics: Bridging Micro and Macro in a Schumpeterian Key Kurt Dopfer
Introduction

This paper attempts a fresh look at Schumpeter's theoretical edifice. The purpose is not to give a comprehensive or complete account of Schumpeter's approach; magisterial works providing exactly this already exist, such as those by Wolfgang Stolper (1994), Richard Swedberg (1991), Mark Perlman and Charles McCann (1998) and Yuichi Shionoya (1997). Instead, we investigate the theoretical corpus of Schumpeter's economics with a view to its possible and actual influence on the construction of a modern Neo-Schumpeterian programme. We shall, on the one hand, briefly highlight the generic architecture of economics as inspired by Schumpeter's work, and, on the other hand, discuss Schumpeter's specific theoretical positions against this background. Turning to the latter, not only do we draw on Schumpeter's theoretical work directly but we also try to achieve a deeper understanding of his theory by looking at the way in which he criticises competing positions, in particular those of classical and neoclassical economics. This will provide us with an idea of what Schumpeter thought a good theory to be. The main proposition of the paper is that Schumpeter launched a revolution along a trajectory from micro to meso, and, with less distinction, from meso to macro. It is argued that his inspiration was the introduction into economics – from the standpoint of contemporary economics – of a micro–meso–macro framework.1

With kind permission of Edward Elgar Publishing, Cheltenham, this text was first published in 2007 under the title "The pillars of Schumpeter's economics: micro, meso, macro". In: Hanusch H, Pyka A (eds) Elgar companion to Neo-Schumpeterian economics, pp 65–77.

1
The concept of meso assumes an intermediate position in the distinction between micro and macro, and hence presumes that distinction. The micro–macro distinction became popular after the publication of Keynes's General Theory, in which he demonstrated that the aggregates of individual decisions (micro) of a Walrasian or similar (neo)'classical' equilibrium were consistent

K. Dopfer (*)
University of St Gallen, St Gallen, Switzerland
e-mail: [email protected]
For the micro, Schumpeter put the energetic entrepreneur centre stage. Not only did he introduce the term "methodological individualism" but, more importantly, he drew a clear line of distinction between the neoclassical Homo oeconomicus, who only reacted to given opportunities, and the energetic innovative individual, who engaged actively in changing these opportunities. The consequences of this theoretical position were far-reaching. Sketched briefly, a novelty represents an idea that can be actualised by many agents. The theoretical body received, therefore, a qualitative element (an idea) and a numerical specification of its actualisation (a population). Thus micro cannot be aggregated into macro, since qualities cannot be added up and the individual agent has to be treated as a distinct member of a population. What emerges is a meso unit that gives micro its distinct position, and that constitutes the building block for the construction of macro. In this view, the course of formulating the theory is not from micro to macro but – with no short cut possible – from micro to meso, and from there to macro.

Schumpeter's major contribution was, as we shall see, in the theoretical exposition of micro and meso. He entertained a grand vision, as laid down, for instance, in his Capitalism, Socialism and Democracy (Schumpeter 1942), but he failed to provide a clear theoretical exposition that would show how the meso components of the economic system are dynamically coordinated, how the "circular flow" is structured, and how, in this way, economic development occurs as a process of structural change. It is interesting that the body of contemporary Neo-Schumpeterian contributions essentially mirrors Schumpeter's original research programme.
There is a wealth of important contributions on micro and meso but less so on macro, as it emerges as a complex structure from the dynamic interplay of micro and meso and as it changes incessantly over time “from within” (Hanusch and Pyka 2005).
with various states of the system when defined in terms of aggregates of other (macro) variables, in particular employment, income and money volume. The present-day proponents of the so-called "new" classical macroeconomics view the problem differently, but the important point here is that the established distinction between microeconomics as dealing with Walras-type decision variables and macroeconomics as dealing with the aforementioned aggregate variables has survived, and is serving as a powerful taxonomic device and classifier for textbooks and teaching curricula in the discipline. This dichotomy did not exist at the time when Keynes was alive and when Schumpeter wrote his essay on Keynes. Schumpeter suggested using either the term "monetary analysis" or "income analysis" for what today is called macroeconomics, arguing that "(s)ince the aggregates chosen for variables are, with the exception of employment, monetary quantities or expressions, we may also speak of monetary analysis, and, since national income is the central variable, of income analysis" (Schumpeter 1952/1997, p. 282). It is evident that the usage of the terms "microeconomics" and "macroeconomics" is a mere convention, and that we could employ with equal vindication Schumpeter's terminology, or a similar one, to denote appropriately the distinction between the two sets of variables. Evolutionary economists see no necessity to follow the conventional terminology and usually refer, when talking about microeconomic analysis, to firms, households or behavioural routines and, when talking about macroeconomic analysis, to the division of labour and knowledge or static and dynamic relationships between aggregate magnitudes. The term meso emerges as a constituent concept, as we shall see, from an evolutionary perspective that defines micro and macro in this way.
Coordination and Change

All sciences resemble one another in that they deal, on the one hand, with relationships among elements and, on the other hand, with the behaviour of the whole over time. Economics is no exception, and, at its most fundamental, the questions of economics are how the economic activities of many individuals are coordinated and how the economy changes over time. The birth of modern economics in the second half of the nineteenth century was largely a response to two grand revolutions, and the general questions of coordination and change received a particular historical mark. The first revolution was a politico-economic one, and gave individuals high degrees of freedom in their operations. The founders of the discipline had a natural curiosity with respect to the theoretical treatment of coordination under conditions of a free, rather than regulated, market economy (as had prevailed in the ancien régime). The other revolution was technological-industrial. Epochal inventions, such as the steam engine and the mechanical loom, led to a path of unprecedented economic growth and broad structural change. Both the bourgeois-liberal and the technological-industrial revolution set the stage for economics as a modern science. In a metaphoric nutshell, economists gained interest in the "invisible hand" (Adam Smith) and in the forces of "creative destruction" that change the economy "from within" (Schumpeter). The two great disciplinary questions provided the inspiration for various theoretical answers. From the point of view of the history of theory development, we can distinguish broadly between classical and neoclassical economics.
The Received Doctrines

Dealing with Schumpeter's assessment of classical and neoclassical economics, it is appropriate to recognise that he took his position to be a yardstick for the assessment of the work of others. He missed few opportunities to make it clear that a theory that failed to acknowledge the central role of the entrepreneur was fundamentally flawed. Using this lens, Schumpeter brought the works of the classical economists into particularly sharp focus. The proponents of the classical doctrine worked with aggregate resource magnitudes, and they proposed looking for objective laws in their relationships. The activities of individuals had no role to play in this objective machinery, and were at best epiphenomena, explained by, but not explaining, the aggregate relationships. Schumpeter, for one thing, objected to the view that all economic development was bound to terminate in a secular stationary state. In this way, David Ricardo and Thomas Malthus conceived economic development as a process whereby population increases led to decreasing marginal returns from agriculture, collapsing eventually into the stationary state of a secular subsistence equilibrium. This "dismal vision" enjoyed a revival in the works of the stagnationists of the times, who held – confirming the predictive conjectures of their
classical precursors – that “the capitalist system has spent its powers, . . . that our economy is, amid convulsions, settling down to a State of Secular Stagnation” (Schumpeter 1954/1986, p. 570). Contemporary authors such as Alvin Hansen failed, in Schumpeter’s view, to recognise that individuals had the power eventually to counter the alleged immanent objective forces, and that these could never force the system into a secular stationary state. Schumpeter’s objectivist critique was not targeted specifically at the stagnationists but included all strands of the classical canon. His critique did not concern the particular direction of the developmental course, or the differences in weight given to its determining factors, but merely the notion that economic change could be explained on the basis of objective laws. The nature of those laws was irrelevant; that is, they could be associated either with entropic forces or with new ideas and knowledge growth. For Schumpeter, the essential point was that development was always propelled by the “agens” of the entrepreneur, and that “in technical or organisational progress there is no autonomous momentum which carries in itself a developmental law, which would be due to progress in our knowledge. [. . .] There is no automatic progress” (1912, p. 480). It is impossible to understand Schumpeter’s disregard of Adam Smith’s work unless one realises that his criticism was not aimed at the categories of the proposed determinants as such but, rather, at their presumed objective nature. From his anti-objectivist platform, Schumpeter issued an indictment of several authors, such as Friedrich List, but the central target was Smith. There is “nothing original” in his writings, Schumpeter says, except that nobody, either before or after A. Smith, ever thought of putting such a burden upon division of labour. With A. Smith it is practically the only factor in economic progress. [. . .] 
Technological progress, “invention of all those machines” – and even investments – is induced by it and is, in fact, just an incident of it. . . Division of labour itself is attributed to an inborn propensity to truck and its development to the gradual expansion of markets – the extent of the market at any point of time determining how far it can go. It thus appears and grows as an entirely impersonal force, and since it is the great motor of progress, this progress too is depersonalised. (1954/1986, p. 188)
Schumpeter highlighted innovations as the central driving force of development, and Smith analogously emphasised the power of innovations unlike any other classical writer, but still no other economist of that strand had to suffer a comparable disregard. It was, arguably, precisely this close congeniality that prompted Schumpeter to take Smith’s work as an exemplar for demonstrating the essential difference between his and the classical approach.
Methodological Individualism

Neoclassical economics ushered in a wind of change. In Schumpeter's view, it introduced a major innovation by acknowledging that the individual agent was central to the formation of economic theory. Its pioneers, such as Léon Walras, Stanley
Jevons, Heinrich Gossen and Vilfredo Pareto, understood that a proper theoretical account of economic phenomena was inconceivable on the basis of objective laws, but was bound to be premised on an understanding of individual cognition and behaviour. Schumpeter did not merely endorse this view but also made a significant contribution to its methodological underpinnings. Inspired by the writings of Carl Menger, he introduced into the project the concept of "methodological individualism" (Heertje 2004). He gave a term to what already united the neoclassical writers and what made them distinct with respect to their classical precursors. The question that arises is whether Schumpeter actually belongs to the neoclassical camp. After all, he is usually considered to represent a major heterodox figure of contemporary economics. A look at the origins of the concept provides us with the essential cue. The neoclassical economists set out to solve the problem of Smith's "invisible hand". Their problem was static, and Pareto's construal of Homo oeconomicus was designed to serve this purpose. Homo oeconomicus only reacts to opportunities, but in no way changes them. Schumpeter's theoretical problem, in turn, was not static, but dynamic. Homo oeconomicus was designed to solve the problems of static analysis, and, because it was successful in doing so, it proved inherently inappropriate for solving the dynamic problem. It is here that Schumpeter's entrepreneur enters the scene. Methodological individualism can thus be interpreted as having two distinct components: one that deals with passive (reactive) individual behaviour, and another that deals with (pro)active individual behaviour.
There is, in this way:

– Passive methodological individualism
– Active methodological individualism

While Schumpeter did not introduce this distinction explicitly, he left no doubt in his writings that neoclassical economics was flawed because it featured only passive methodological individualism – ignoring its active counterpart. Schumpeter was not just an innovator with regard to the concept of methodological individualism; he also completed it.
Mesoeconomics

This was only the beginning of the story, however. In Schumpeter's interpretation, the active individual was not active simply in terms of ongoing operations under given conditions, but also – and decisively – in terms of changing these conditions. Most significantly, the active agent changes the conditions by introducing into the system a new idea. Naturally, not all ideas are economically relevant, and we must distinguish analytically between those ideas that are relevant and those that are not. We consider as relevant those ideas that can serve as a basis for economic operations. We call an idea useful for economic operations a generic idea or generic rule (Dopfer 2005; Dopfer and Potts 2007). A generic rule – say, technology – can, qua idea, often be actualised. The idea imposes no limitations on the frequency of
its physical actualisation. There is "one-ness" in the rule, but "many-ness" in its actualisations. The process of actualising a rule (Y) follows a distinct historic logic over time, with an inception (1/n, (n−1)/n), an unfolding (Y/n, (n−Y)/n) and a termination (n/n, 0/n). There is, therefore, "first-ness" (1/n) in the adoption of a new rule. The first adoption of a rule (the innovation) must be distinguished from its first occurrence (the invention). Schumpeter emphasised in all his works that the major task of the generically active agent – the entrepreneur – lies in the carrying out of new combinations in the market and not in finding new ideas. The energetic entrepreneur is followed by a swarm of imitating, generically passive adopters (Y/n). The process settles down in a (temporarily) stable pattern of relative adoption frequency, whereby all adopters who wanted to adopt the rule have actually adopted it (n/n, and 0/n, if competing rule X is eliminated). We shall provide this abstract skeleton with some empirical flesh when discussing Schumpeter's contribution based on the notion of the meso trajectory (section "Schumpeter's Meso Trajectory"). At this juncture it is important to recognise that Schumpeter's thoughts had a subversive nature, in that they were capable of challenging the foundations of economics. In abstract attire, there is one-ness of idea, and many-ness of physical actualisations. Ontologically, there is bimodality. This yields an elementary analytical unit with two distinct components. One is structural; the other processual. As idea, meso represents a component of a (macro) structure. It relates to other ideas in its mode of quality. As physical actualisation, meso unfolds in time (and space). The structural component, if expressed in a temporal context, must be conceived of as the process component stated in terms of the historical logic proposed.
The macro structure is to be viewed, if actualised, as being composed of structure-specific process components. Neoclassical economics does not specify a comparable elementary unit for the theoretical elucidation of structure and process. Its mono-modality leads to a uniform micro unit that can be aggregated and disaggregated in its qualitatively unspecified quantities ad libitum. It lacks a qualitative specification (an idea or rule) and a numerical specification (a population of adopters). It can, therefore, serve neither as a structure component nor as a process component. At this point, the question may arise whether other, notably classical and Marshallian, approaches provide a meso unit with the stated properties. As it happens, the two approaches do indeed resemble the meso unit sketched. It is also the case, however, that they lack the essential features that relate to the bimodality of the construal, and as a consequence it is preferable to refer to them as quasi- or proto–meso approaches.
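The bimodal construal can be made concrete in a toy data structure. The sketch below, with names of my own choosing and purely illustrative, pairs the qualitative component (one generic rule) with the numerical component (a population of adopters); the point of the bimodality is that two such units cannot be meaningfully "added up", only related:

```python
from dataclasses import dataclass, field

@dataclass
class MesoUnit:
    """Toy bimodal meso unit: one generic rule (idea) plus the
    population of agents that have actualised it."""
    rule: str                                    # qualitative component ("one-ness")
    adopters: set = field(default_factory=set)   # physical actualisations ("many-ness")

    def adopt(self, agent: str) -> None:
        self.adopters.add(agent)

    def adoption_share(self, n: int) -> float:
        """Position on the trajectory: 1/n at the innovation, n/n at saturation."""
        return len(self.adopters) / n
```

Using a set for the adopters also captures the requirement that each agent is a distinct member of the population rather than an anonymous quantity.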
Proto–Meso: Classical Economics and Alfred Marshall

Classical economics approaches meso with its concepts of natural and actual price. The natural price is the market price under "normal circumstances", towards which the prices of all commodities are continuously gravitating.
Particular circumstances may keep the actual market price above the natural price. We may interpret this such that these particular circumstances represent the introduction of a novelty, and the entrepreneur has (as the monopolist) an innovation rent. The actual price would then differ initially from the natural price. Subsequently, there would be a tendency for the actual price to gravitate towards the natural price. This is a good approximation of what can indeed be observed in real economies. The classical economists interpreted this differently, however. They were prepared to regard factors such as natural disasters, governmental price regulations or organised monopoly power as particular circumstances causing a price deviation, but they did not make any systematic reference to technical (or other) innovations. The natural price represents a static datum, defined by the market form of competition. This market form is not itself an emergent property of a meso process. Furthermore, individuals are not introduced into the theory, and in fact are not required given the objective “law of gravity”. Nonetheless, the dynamics of meso can be explained only on the basis of a process of interactions among individuals, and not in terms of a commodity aggregate attracted by a centre of gravitation. The defects of the theoretical construct show up, essentially, in two ways. On the one hand, there is no explanation of the dynamics of market forms, which figured prominently in Schumpeter’s work. On the other hand, the classical model fails to tackle adequately major aspects of the meso process, such as diffusion, macroscopic adoption, selection and path dependence. Another important case of quasi-meso is provided by Marshall’s distinction between short- and long-run demand and supply schedules. Marshall introduced time into analytical processes, and showed how equilibria shift over time because of certain factors. 
These include economies of scale internal to an industry, demand shifts, and classical factors such as population and capital accumulation. Again, though, technological progress does not figure as a key factor. There is no systematic assumption about an initial innovation that evolves along a technological or other knowledge trajectory. A difference with regard to the classical canon exists, however, in that Marshall did employ methodological individualism. This provides an explanatory potential, but, again, when specifying the concept, he introduced the construct of the “representative firm”. An account of meso relies crucially on the premise of the heterogeneity of agents. Schumpeter’s distinction between the entrepreneur and the “statische Wirte” (e.g. managers) is an exemplar for this essential kind of heterogeneity in meso. As a consequence, Marshall failed to explain the meso process, and his analysis eventually drew on classical factors and the operant notions of elasticities and shifting schedules. There are objective determinants on the one side, and shifting quantities on the other, but no generic process. Marshall had an evolutionary vision, and, from everything that we know about his life, he was frustrated when he attempted to match it to his actual work.
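The classical story sketched above, in which an innovation rent lifts the actual price above the natural price and competition then lets it gravitate back, can be illustrated numerically. The geometric decay rate is an illustrative assumption of mine, not part of the classical doctrine:

```python
def price_path(natural_price, innovation_rent, gravitation_rate, steps):
    """Actual price starting above the natural price (innovation rent)
    and gravitating back towards it as imitators enter."""
    prices = []
    actual = natural_price + innovation_rent
    for _ in range(steps):
        prices.append(actual)
        # each period, competition erodes a fixed share of the remaining rent
        actual = natural_price + (actual - natural_price) * (1 - gravitation_rate)
    return prices
```

With natural_price=10, innovation_rent=5 and gravitation_rate=0.2, for example, the rent halves roughly every three periods, so the actual price converges on the natural price without ever explaining the market form itself.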
Schumpeter's Meso Trajectory

The meso unit inspired by Schumpeter's work constitutes both a structural component and a process component. The structural component must be brought into analytical perspective by relating it to other structural components and by then combining them into a structured whole. Schumpeter's contribution to the integration of the meso component into a macro structure is discussed in the next section. Here we discuss Schumpeter's contribution to meso as a process component. A novel idea or rule is viewed as being physically actualised along a three-phase trajectory. To facilitate the discussion, we may further subdivide each of the phases, and specify the trajectory on the basis of six (sub-)phases. Turning to the initial analytical outline, the three phases are: origination, adoption and retention. In the first phase, origination, the distinction is between the creation and the discovery of a new idea; in the next phase, adoption, it is between the initial and the many following adoptions; and in the terminal phase, retention, the distinction is between the stabilising and destabilising forces determining the generic rule regime. The six-phase dynamic was originally introduced as a schema for a comparative theory study that included neoclassical, Austrian and evolutionary/Schumpeterian economics (Dopfer 1993). In the following, the discussion is confined to the contribution that Schumpeter made to the theoretical elucidation of the six trajectory phases. These can be summarised as follows:

I. Origination
   Sub-phase 1: creation of novel idea, innovative potential
   Sub-phase 2: search, discovery and recognition process, microscopic selection

II. Adoption
   Sub-phase 3: first adoption, chaotic environment, bifurcation, uncertain outcome
   Sub-phase 4: macroscopic adoption of "seed", selective environment, path dependence

III. Retention
   Sub-phase 5: retention of adopted "seed", meta-stability of actualisation process
   Sub-phase 6: existing regime as breeding ground for novel potential(s), link to phase I

Schumpeter's key contribution lies in his analysis of sub-phases 2, 3 and 4. The locus classicus of his analysis is phase II. In sub-phase 3 (the first stage of adoption) the entrepreneur puts a new combination into practice, changing the environment by initiating a new meso trajectory, which eventually gains momentum in sub-phase 4 (the second stage). The latter is, generally, a population process, which can be specified theoretically in various ways. Schumpeter focused on the dynamics of capitalist market forms, such as monopoly, oligopoly and competition, and
discussed their welfare and societal consequences. Neo-Schumpeterian economics has an explicit population core, from which diffusion, selection, path dependence and related models can be developed and the original market dynamic integrated. A further link is from Schumpeter's adoption phase II to the second stage of origination (sub-phase 2), which displays entrepreneurial activities with regard to the search and discovery of new ways of doing things. The lacunae in Schumpeter's work are sub-phases 1, 5 and 6. In all his work Schumpeter emphasised that it is not the creation but the carrying out of new ideas that is relevant for coping with the phenomenon of economic development. "There are always changes in an economy, and we are not closer to the exhaustion of possibilities today than we were in the stone age" (1912, p. 161). While this is a reasonable conjecture, it does not provide us with an appropriate micro foundation for a theory of a knowledge-based economy in which the creation of knowledge is a pivotal factor and requires theoretical recognition. The lack of explication for sub-phase 1 is a major theoretical shortcoming in Schumpeter's work (Witt 2002). The second lacuna refers to sub-phases 5 and 6, which, essentially, deal with institutional factors. Schumpeter refers to institutions and related factors occasionally, as when arguing that habits, once "hammered in", become "as firmly rooted in ourselves as a railway embankment in the earth" (Loasby 1999), but he fails to deal with these sub-phases systematically. Significantly, meso builds on the notion of circularity between individual and population. The trajectory dynamic unfolds not as a diffusion of a single-valued variable but, rather, as a process in which individuals interact with an emergent population in a self-reinforcing way. Thorstein Veblen analysed this process on the basis of his concept of circular and cumulative causation.
Schumpeter, instead, stressed the significance of the linear causality principle (Schumpeter 1912), which poses various problems when dealing with meso. Not only is a population an aggregate of individual behaviours but, frequently, it also becomes an (order) parameter that feeds back to individual behaviour. The application of the linear causality principle excludes a broad range of models subsumed under the term “path dependence”, and following this principle would narrow down the scope of a Neo-Schumpeterian programme drastically.
The Generic Architecture of the Economy

While this critique may be justified, it cannot detract from Schumpeter's principal merit: laying the foundations of meso. Given its bimodality, macro emerges in this framework as a two-storey structure. It is composed of a "deep" level, of ideas or generic rules, and a "surface" level, of their physical actualisations. The critical task of theory making in this framework is the translation of meso into the thus defined macro.
How did Schumpeter deal with the task of explaining coordination and change at the two levels? A theoretical account of the "deep" level of coordination and change refers, essentially, to the division of knowledge and labour. It is remarkable not only that Schumpeter explicitly rejects the essential message of chapters 1 and 2 of Smith's Wealth of Nations (section "The Received Doctrines") but also that he largely fails to make any attempt to deal with the problem of coordination and change at the "deep" generic level. The "surface" level of the actual process can be associated with Schumpeter's concept of "circular flow". He borrowed this concept from the classical economists, and employed it in the first two chapters of his The Theory of Economic Development (Schumpeter 1912/1934). In the first chapter he provides an impressive tour d'horizon of classical theories and identifies a host of factors determining the system's stationarity. He views stationarity as an equilibrated flow defined in terms of persistence in the parameters of generic variables and as a recurrent flow of resources. In the second chapter the entrepreneur enters the stage, destroying the equilibrium of the circular flow and propelling economic development. Schumpeter emphasises in general that economic development is a qualitative process involving structural change. The question arises, therefore, of how structure is dealt with in the circular flow that defines the macro state of the system, whether it be stationary or non-stationary (developmental). Having failed to integrate meso at the deep level, Schumpeter's analysis has no counterpart at the surface level, where meso unfolds in the actual emergence of a new macro structure. Classical economists gave the circular flow a rich texture, with productive and consumptive activities embedded in the matrix of social classes. In contrast, Schumpeter leaves in limbo the structure of the circular flow. It remains basically unstructured.
As a consequence, economic development can be viewed only in an implicit manner, as a process involving the destruction of existing economic structures, or as incessant structural change. There was, however, Walras's general equilibrium theory, which Schumpeter ranked as the Magna Carta of economics as a discipline (Schumpeter 1952/1997). He referred to it on various occasions when dealing with coordination issues, but it was evident that a theory that treats the generic variables, such as technology and institutions, exogenously could not explain the coordination and change of these variables. It must be considered a major deficiency of Schumpeter's work that he failed to furnish any immediate insights as to how the theoretical step from meso to macro is to be accomplished. He entitled chapter 7 of his Theory of Economic Development (1912) "The economy as a whole", but nonetheless hardly addressed there the issue of meso trajectories as they combine, in a process of self-organisation, into macro structure and as they incessantly change "from within". Instead of dealing with this issue further, he dispensed altogether with chapter 7 in later (including the English-language) editions.
Conclusion

In mainstream economics, aggregation and disaggregation are mirror procedures – or, as Paul Samuelson says in his textbook, you can start either with micro or with macro as you see fit. In the Schumpeterian programme, meso is central. Meso serves as both structural component and process component, explaining generic structure and generic change. To rely in this programme only on micro and macro is rather like having Hamlet without the prince. Schumpeter made the cast complete by laying the foundations and by contributing theoretically to meso.

Acknowledgements

I gratefully acknowledge comments and suggestions by Georg D. Blind, Patrick Baur, Charles McCann, Stuart McDonald, Joseph Clark, Peter Fleissner, John Foster, Jason Potts, Andreas Pyka, Mike Richardson, Markus Schwaninger and Ulrich Witt.
Bibliography

Alcouffe A, Kuhn T (2004) Schumpeterian endogenous growth theory and evolutionary economics. J Evol Econ 14:223–236
Andersen ES (1994) Evolutionary economics: post-Schumpeterian contributions. Pinter, London
Brette O, Mehier C (2005) Veblen's evolutionary economics revisited through the "micro-meso-macro" analytical framework: the stake for the analysis of clusters of innovation. Paper presented at the EAEPE Conference, Bremen, 10–12 Nov 2005
da Graca Moura M (2003) Schumpeter on the integration of theory and history. Eur J Hist Econ Thought 10(2):279–301
Dopfer K (1993) The generation of novelty in the economic process: an evolutionary concept. In: Dragan JC, Seifert EK, Demetrescu MC (eds) Entropy and bioeconomics. Nagard, Milano, pp 130–153
Dopfer K (ed) (2005) The evolutionary foundations of economics. Cambridge University Press, Cambridge
Dopfer K, Foster J, Potts J (2004) Micro–meso–macro. J Evol Econ 14:263–279
Dopfer K, Potts J (2007) The general theory of economic evolution. Routledge, London
Encinar M-I, Munoz F-F (2006) On novelty and economics: Schumpeter's paradox. J Evol Econ 16:255–277
Fagerberg J (2003) Schumpeter and the revival of evolutionary economics: an appraisal of the literature. J Evol Econ 13:125–159
Foster J (2000) Competitive selection, self-organisation and Joseph A. Schumpeter. J Evol Econ 10:311–328
Hanusch H (ed) (1988) Evolutionary economics: applications of Schumpeter's ideas. Cambridge University Press, Cambridge
Hanusch H, Pyka A (2005) Principles of Neo-Schumpeterian economics. Discussion Paper no. 278, University of Augsburg, Augsburg
Heertje A (1988) Schumpeter and technical change. In: Hanusch H (ed) Evolutionary economics: applications of Schumpeter's ideas. Cambridge University Press, Cambridge
Heertje A (2004) Schumpeter and methodological individualism. J Evol Econ 14:153–156
Hodgson GM (1993) Economics and evolution: bringing life back into economics. Polity Press, Cambridge
Holland JH, Holyoak KJ, Nisbett RE, Thagard P (1986) Induction: processes of inference, learning and discovery. MIT Press, Cambridge, MA
Loasby BJ (1999) Knowledge, institutions and evolution in economics. Routledge, London
Mathews JA (2002) Introduction: Schumpeter's lost chapter. Ind Innov 9(1–2):1–6
Metcalfe JS (1998) Evolutionary economics and creative destruction. Routledge, London
Nelson RR, Winter SG (1982) An evolutionary theory of economic change. Harvard University Press, Cambridge, MA
Perlman M, McCann CR (1998) The pillars of economic understanding, vol I: Ideas and traditions. Michigan University Press, Ann Arbor
Rosenberg N (2000) Schumpeter and the endogeneity of technology: some American perspectives. Routledge, London
Schefold B (1986) Schumpeter as a Walrasian Austrian and Keynes as a classical Marshallian. In: Wagener HJ, Drukker JH (eds) The economic law of motions: a Marx-Keynes-Schumpeter centennial. Cambridge University Press, Cambridge
Schumpeter JA (1912) Theorie der wirtschaftlichen Entwicklung. Duncker & Humblot, Leipzig [revised English edition, without chapter 7, 1934; chapter 7, in English, in Industry and Innovation (2002) 9(1/2):91–142]
Schumpeter JA (1939) Business cycles: a theoretical, historical and statistical analysis of the capitalist process. McGraw-Hill, New York
Schumpeter JA (1942) Capitalism, socialism and democracy. Harper & Brothers, New York
Schumpeter JA (1952/1997) Ten great economists: from Marx to Keynes. Routledge, London
Schumpeter JA (1954/1986) History of economic analysis. Routledge, London
Shionoya Y (1997) Schumpeter and the idea of social science: a metatheoretical study. Cambridge University Press, Cambridge
Stolper WF (1994) Joseph Alois Schumpeter. Princeton University Press, Princeton
Swedberg R (1991) Schumpeter: a biography. Princeton University Press, Princeton
Winter SG (1984) Schumpeterian competition in alternative technological regimes. J Econ Behav Organ 5:287–320
Witt U (2002) How evolutionary is Schumpeter's theory of economic development? Ind Innov 9(1–2):7–22
Coordination on "Meso"-Levels: On the Co-evolution of Institutions, Networks and Platform Size

Wolfram Elsner and Torsten Heinrich
Introduction

Until recently, the "meso" level, somehow located between the "micro" and "macro" levels of the socio-economy, has been neglected in economic theory. Although there is a long tradition of evolutionary-institutional economics which implicitly operates at this level, considering the formal and informal institutions that arise from direct interaction and direct interdependence of individual micro-level agents, it was not until the rise of formal evolutionary modeling in recent decades that the discussion of explicit and quantitative models of "meso" economics gained momentum. However, in all, the question as to why and how institutions emerge and become effective at some proper "meso" level has rarely been explicitly delved into, and the basic logic of the size dimension, and particularly of "meso" emergence, has rarely been focused on. This is what this paper seeks to do. We discuss the emergence of cooperation as a social institution solving collective-good (dilemma) or coordination problems. Departing from a commonly known situation of non-cooperative game theory, i.e., the repeated prisoners' dilemma, cooperating agents become able to coordinate with other cooperators only in a "meso"-sized group of cooperating agents – the institution's carrier group – giving rise to a process of co-evolution of (1) the institution of cooperation, (2) a loosely connected network of cooperating groups and (3) a better performance of a population with extensive "meso" platforms. Smaller groups would tend to be less effective if not irrelevant, while larger groups become too anonymous to sustain (institutionalized) cooperation. In this way, we may apply the theoretical considerations of this paper to "general trust" and macro-performance of some "small" (and highly industrialized) countries being "well-structured" into "meso"-sized arenas. We may develop an
W. Elsner (*) • T. Heinrich University of Bremen, Bremen, Germany e-mail:
[email protected] S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_6, # Springer-Verlag Berlin Heidelberg 2011
empirical research strategy when some "meso"-sized structures are already historically given, and their co-evolution with institutional emergence just means their further reproduction and development, generating high levels of general trust, cooperation, social capital and, with this, finally a high macro-performance. If "meso" size is relevant, it could also be used for political design to generate conditions for institutional emergence and thus high macro-performance. Starting with a review of the recent literature on the economics of "meso" (section "Size and 'Meso' Size in the Literature so far"), we proceed to analyze the evolution of institutionalized cooperation (section "Population Size in a Static 'Single-Shot' Perspective") and its co-evolution with the "meso"-sized carrier structures as a supergame of repeated games, an entirely deterministic analysis so far (section "The Population Perspective, More Complex Mechanisms, and Minimum and Maximum Critical Sizes"). In sections "The Population Perspective and the 'Minimum Critical Mass': A Graphical Display" and "'Loosening' Connectivity: 'Contingent Trust', Monitoring, Memorizing, Reputation, and Selection: With some Numerical Examples", we add stochastic considerations and arguments. Section "Insights from Computer Simulation: An Agent-Based Model" provides an agent-based computer simulation, and in section "A Real-World Example: Trust Polls, Country Size, and Macro-Performance" we review an empirical example of the lastingly different macroeconomic performance of a small and well-structured country (Denmark) vs. a large ill-structured country (Germany) (thus referring to the "varieties-of-capitalism" literature). Section "Conclusions" summarizes our results and draws some conclusions on the potentials of an analytic "meso"-economics.
Size and "Meso" Size in the Literature so Far

Some evolutionary economists have elaborated in recent years, in a Neo-Schumpeterian perspective, on the ontology of "meso" in terms of "meso rules" and the process of their generation, adoption, diffusion and retention (Dopfer et al. 2004; Dopfer 2001, 2007; Dopfer and Potts 2008, Chap. 4). They have argued that, and described how, origination, adoption, diffusion and retention of a rule take place in a "meso"-sized group of carriers with a "meso"-sized population of actualizations of an ideal generic rule. However, they have not elaborated specific causal mechanisms by which "meso" comes into existence to solve specific problems. In addition to that approach, the present paper seeks to establish that the emergence, i.e. generation, adoption and diffusion, of an institution can be traced back to a specific but general problem which agents have continuously to solve both individually and collectively through that very process of institutional emergence. R. Gibbons has recently advocated "to bring interests back into our thinking about (...) routine production" (Gibbons 2006, 381; italics added), referring to the "folk theorem": "building an equilibrium means that interests creep in; one cannot
analyze just the evolution of beliefs" (p. 385). In fact, the game-theoretic approach is about a complex interest structure to be solved through mutual adaptations of behaviors and expectations becoming consistent. In the present paper, thus, we will explore a simple logic of the co-evolution of (1) a ubiquitous real-world complex incentive structure, (2) "experienced" expectations ("to meet again"), indicative, in turn, and in varying degrees, of (3) the group size, and of (4) the institution as such (as outcome of the individuals' successful efforts to improve their well-being in that typical interdependent decision structure). This might contribute to a general "meso"-economics wherein "meso" groups, "meso arenas", or "platforms" in manifold socio-economic areas (regional, industrial, or professional clusters, networks, agglomerations, segregation and neighborhood structures, etc.) may become the theoretical locus of emergent structure. Coordinated (and cooperative) systems of such "relevant" sizes may have a specific capability of learning, innovative collective action, and thus eventually high macro performance, under different parameter configurations (for a large field application of different game structures in small-scale societies with different configurations, including group size as a critical factor, see Henrich et al. 2004 [1]). However, a heroic implicit presumption of most game-theoretic arguments is complete information. Agents are assumed to have a directly observable connection between actions and outcomes and thus intense incentives or pressure to learn both their social interrelatedness and their common interrelatedness with their natural environment. This transparency is rarely the case in reality, where the direct connection of action to feedback, and thus the pressure to learn, typically is much weaker.
Real societies, even "primitive" and small-scale ones, display a surprising variety of degrees of learned and institutionalized cooperation and reciprocity (see again Henrich et al. 2004), since mankind often is more detached and freed from evolutionary pressure to immediately adapt and learn in an instrumental (problem-solving) sense than the game-theoretic argument under complete information (transparency) presupposes. Thus, empirically, even "meso"-sized and "primitive" groups sometimes may display low degrees of cooperation. They can afford certain levels of non-cooperation and conflict. Typically, then, the "backup capacity" of humans to improve their position even with low levels of cooperation implies exploiting the commons of nature and society where this does not immediately and transparently feed back to the individual agent. This might reflect a situation where institutionally coordinated solutions in an instrumental sense may have become "petrified", "sclerotic", ceremonially
[1] Group size is there but one critical factor among others and interferes with other factors to form different interaction conditions and trigger different resulting degrees of institutionalized cooperation, although the real societies explored are all "small-scale" (ranging in size between 75 and 1,219). This, however, does not imply that relative smallness of interaction arenas, or "meso" group size, as such would not tend to be a favorable condition of institutional emergence. In fact, size was found in that large cross-cultural field experiment to be a good predictor for payoffs to cooperation.
encapsulated (Bush 1987), or "locked-in", in the course of their life cycles through status, power, hierarchy, and related myths and belief systems, so that collective action cannot be renewed to lock the system out again (for a classical model of merely technological "lock-in", see, e.g., Arthur 1989; for the classic of institutional lock-in, see David 1985; and for potential lock-out, see, e.g., Dolfsma and Leydesdorff 2009). Many have paved the way for exploring critical size. For instance, there has been a renewed interest in T. Schelling's (1969, 1973, 1978) early investigations in the emergence of coordination in attendance problems and in emergent spatial segregation (see, e.g., Vinkovic and Kirman 2006; Aydinonat 2007; see also already Elster 1989). Also, R. Axelrod's 1980s approach to a quasi-evolutionary simulation of emergent cooperation and evolutionarily stable critical mass (incl. segregation) is still widely discussed (e.g., Axelrod 1984; see the statistics in R. Dawkins' foreword to the 2nd ed. 1984/2006), and the iterated PD is still much applied and elaborated on (e.g., Knudsen 2002; Devezas and Corredine 2002; Eckert et al. 2005; Goyal 2005; Traulsen and Nowak 2006; Ahdieh 2009; Mengel 2009; Konno 2010). W.B. Arthur's "El Farol" attendance coordination problem (Arthur 1994) has also triggered research on coordination success and failure which implies a size issue. Game theorists have found some confirmation of the relevance of ("meso") group size also in lab experiments (see, e.g., Yamagishi 1992). Generally, group (or "network") formation processes have been a continuing issue (e.g., Demange and Wooders 2005; Page and Wooders 2007; Jun and Sethi 2009; Konno 2010; on general considerations of the interdependence of group formation and neighborhood structure in evolutionary game theory see, e.g., Goyal 1996, 2005; Kirman 1998; Galeotti et al. 2010; Mengel 2009).
The evolutionary dynamics in a PD, when controlling for a broad range of initial conditions and allowing for a variety and an ongoing generation of ever more complex strategies, have been developed far beyond standard PD-supergame equilibria or well-defined attractor solutions such as a clear-cut Axelrodian TFT dominance (e.g., Lindgren and Nordahl 1994; Binmore 1998; Foley 1998; Watts 1999, Chap. 8). Institutionally oriented game theorists and game-theoretically oriented institutionalists, such as A. Schotter (1981), A.J. Field, S.P. Hargreaves Heap or E. Ostrom, have built bridges between game theory and evolutionary-institutional theorizing, and the size dimension has often played some role (for overviews of the issues, see, e.g., Dosi and Winter 2000; Ostrom 2007). Also, many evolutionary-institutionalist economists have elaborated on institutional emergence and group or "network" conceptions of the individual (e.g., Hodgson 2000; Davis 2007, 2008). And some particularly have contended that institutions are "meso", would emerge at some "intermediate" level, are most effective in "mid-sized" groups, etc. (e.g., van Staveren 2001, 179f.; for industries and regions: Elsner 2000). Group size has also been an obvious issue of the collective-good problem since M. Olson's (1965) Logic of Collective Action, where the collective good has a better chance of being produced the smaller the relevant group which is constituted to generate the good (see, e.g., Dejean et al. 2008). Also, some have investigated "critical masses" in collective action along Olsonian lines considering large
contributors to the collective good. The latter can either produce the good alone or mobilize a selected minimum producer group (see, e.g., Marwell and Oliver 1993). This, however, is not done in an evolutionary emergence perspective, which is what we pursue in this paper. If game-theoretic arguments were applied, this would probably be a case for cooperative game theory. However, the question as to how and why institutions emerge and become effective at some proper "meso" level remains mostly unexplored. The basic logic of the size dimension that has rarely ever been addressed is what this paper will explore. Dosi and Winter (2000) have attempted an overview of evolutionary theorizing, games, and emergent processes and have concluded that there is no complex modeling without a proper qualitative evolutionary "process story" (see also Gruene-Yanoff and Schweinzer 2008). We will "embed" a simple model logic in a proper process-story frame below. We confine ourselves to some simple formalism and graphical illustration along Schelling's and Axelrod's lines and will add some model simulation. It has often been suggested that the "macro" level, conventionally understood here as the national level of formal organization and public agency, has become less relevant in a (global) cultural emergence perspective. However, it is disputable whether it is more appropriate to consider informal institutional emergence still under a "micro-to-macro" perspective and terminology rather than to conceptualize "meso" as a theoretical and practical socio-economic level of its own (for a "micro-to-macro" terminology, see, e.g., Hodgson 2000; Ayres and Martinás 2005). We will argue here that there are considerable theoretical and empirical reasons to envisage a specific level of informal cultural emergence below, and sometimes perhaps across, conventional "macro" jurisdictions (which typically are the loci of enculturation through more formal and often ceremonial institutions).
And since emergent structure is not reducible to its initial micro-components (i.e., agents of different kinds, heterogeneous agents), it is of course above the "micro" level. In all, what complex evolutionary-institutional theorizing, modeling or simulation, and real-world clusters, networks and all kinds of group cultures are all about may require a theoretical space of its own – "meso" (see also, e.g., Chen 2008, 121). Emergent structure typically is not a "macro aggregate" but a "meso" phenomenon. Thus, we define micro as the level of individual agents and their interactions. As soon as some "structure" (institution) has emerged that exists independently of any individual agent's action, we understand this to belong to the meso level if a "meso"-sized relevant interaction arena or "carrier group" has also been co-evolving with that structure or can be identified. A meso-sized group, in turn, is defined as any "relevant" (i.e. actually emerged as cooperating) group of a size equal to or smaller than the larger whole population involved. Finally, if the "relevant" group can be shown to be smaller than the whole population, the latter may be considered to belong to the macro level, mirroring perhaps the real-world "national" level as mentioned. Note that we will consider a population later that is "structured" only between (TFT-) "cooperators" and "defectors" (where this particular division structure will
change with “emergent structure”), with no opportunity to leave this arena and population or migrate among several populations, i.e., no population of populations, which may be an important framework for other game-theoretic settings (see, e.g., Knudsen 2002; Traulsen and Nowak 2006).
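The closed two-strategy population just defined can be given a minimal formal sketch. The following Python illustration is my own, with hypothetical payoff values, not the authors' model; it anticipates the "minimum critical mass" analysed in the next section: with a share k of TFT cooperators and random matching, cooperation pays only above a critical share k*.

```python
# Sketch (assumptions: random matching in one closed population, agents are
# either TFT or All-D, payoffs b > a > c > d, common discount factor delta).

def expected_supergame_payoff(strategy, k, a, b, c, d, delta):
    """Expected discounted payoff when a share k of the population plays TFT."""
    if strategy == "TFT":
        vs_tft = a / (1 - delta)               # mutual cooperation forever
        vs_alld = d + delta * c / (1 - delta)  # exploited once, then mutual defection
    else:  # All-D
        vs_tft = b + delta * c / (1 - delta)   # exploits once, then mutual defection
        vs_alld = c / (1 - delta)              # mutual defection forever
    return k * vs_tft + (1 - k) * vs_alld

def minimum_critical_mass(a, b, c, d, delta, steps=10_000):
    """Smallest share k at which TFT weakly outperforms All-D."""
    for i in range(steps + 1):
        k = i / steps
        if (expected_supergame_payoff("TFT", k, a, b, c, d, delta)
                >= expected_supergame_payoff("ALLD", k, a, b, c, d, delta)):
            return k
    return None  # cooperation never pays at this delta

k_star = minimum_critical_mass(a=3, b=5, c=1, d=0, delta=0.9)  # about 0.059
```

With these illustrative numbers the critical share is small (1/17); the mechanisms discussed below, such as memory, monitoring and reputation, would lower it further.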
Population Size in a Static "Single-Shot" Perspective

The General Setting

In the following, we will first present our argument in the frame of a simple static and deterministic "single-shot" logic. Then we will consider group size in a co-evolutionary process of institutional emergence and group constitution. In the context of "embedding" this logic in a "process story", we will adopt a population perspective based on some motivational considerations where the portions of cooperating and defecting agents in the population become a stochastic variable at the outset. The evolutionarily stable "minimum critical mass" and the (maximum) "relevant cooperating group" are identified, the latter being the whole population at first. Memory, monitoring, reputation chains and (some) active partner selection then allow for a smaller minimum critical mass through faster increasing cooperative payoffs. They also allow for the "relevant" group size to increase above a "very small" group through loosening connectivity, i.e. "contingent trust" can then be kept high while the "relevant group" increases, through better assorted individual peer groups. However, when those mechanisms wear out with increasing portions of cooperators, the relevant group size may become a "maximum critical mass" smaller than the whole population. Finally, we will consider a recent application, namely persistent national differences in "general trust" and corresponding macro-performance, an interesting dimension of the topical "varieties of capitalism" issue. A strategy for empirical application of the theoretical approach and some first empirical indications of its relevance will be discussed. Collective-good and social dilemma problems are ubiquitous as practical everyday problems.
There is in fact a collective-good problem involved in every single economic decision, even in the simplest supermarket purchase, but also in any more demanding technological coordination problem in the fragmented value-added chain. If a full-fledged institution already exists with zero defection, then typically any agent actively contributes to the reproduction of the institution, and of the corresponding expectations of others, through cooperative behavior. However, if this is not the case and if an agent may expect another agent to behave in a cooperative way in the next interaction, there is, under certain conditions, a dominant incentive for him not to contribute. These conditions include, besides the payoff structure, the expectations "to meet", memory, monitoring, reputation, related sanctioning and the danger of being rejected and excluded from many interactions.
By not contributing he may take the opportunity of a potential short-run one-shot extra gain, for instance, by running away without paying, by somehow cheating, avoiding own costs, exploiting some commons, exploiting positive externalities from others, etc. For instance, under certain circumstances in the fragmented value-added chain, the incentive to free ride, by saving R&D expenses and profiting from incoming knowledge spillovers which are "inappropriable" by their creators, may become virulent. Similarly, even in a purely technological Arthurian random net-technology choice problem, agents may be dominantly incited to free ride by waiting until others have made their decisions, in this way avoiding later regret (if they can afford to wait). Generally, agents in a more or less individualistic culture may be incited to defect in manifold ways, and will do so insofar as the situation is not fully governed by institutions (not considering formal hierarchical control). So any socio-economic (trans)action is embedded in a larger dilemma problem and will, or will not, contribute to the production or reproduction of the general frame of expectations which in turn allows for, or undermines, institutional emergence to overcome that basic dilemma. Along these lines, and according to a large literature of applied game-theoretic argument, we have argued elsewhere that any production, information and innovation system, under conditions of fragmented value-added chains (in face of complex integrated products), of (competing) net-technologies, and the collective-good character of basic information, can be modeled as a system of mutual externalities, collectivities and cumulative actions, such that it can be reconstructed as a PD in which any transaction is embedded (Elsner 2005). However, the PD structure often exists only "in the background", while the observable social surface is dominated by some solutions, i.e. some institutionalized arrangements.
These may be “instrumental” (i.e. proper for ongoing problem-solving, which is the subject of this paper) or “locked-in” on an inferior technology (being also an inferior institution), or even completely mutually blocked through general free-riding and non-action. The following sections will, however, focus on the formal perspective of the emergence of cooperation, considering three different approaches: first, the deterministic analysis of the problem of group size in the supergame of repeated games; second, the population perspective, considering stochastic elements; and finally, the analysis of further mechanisms of the emergence of institutional cooperation, using computer simulation of a more complex agent-based model. It should be noted that the term “emergence” in this context is not the same as what is described by “foundation” or “development”; rather, it implies a dissipative (stable) structure (in this case, the institution of cooperation in the “meso”-sized group) on top of a noisy micro-layer (in this case, the level of individual agents): the inner working of the institution is independent of the individual agent; the institution would stay operative if some or all of the agents were replaced with other cooperators. Further, the question is not so much how the institution comes into being, but rather how it gains stability. If the institution can be proven stable, we can deem its eventual development likely, explaining it
with random variation, experimentation induced by frustration with an inferior situation (of general defection), or others.
A Formal Sketch

The simplest formal illustration of the static “single-shot” solution provides the logical condition for the superiority of cooperation over defection. Assume the payoff matrix

    a, a   b, d
    d, b   c, c

with b > a > c > d, and a > (d + b)/2. As is well known, the payoffs P in a supergame for the TFT player always encountering another TFT player, and for an ALL D player encountering a TFT player, with δ being the common discount factor, are

    P_TFT/TFT = a + δa + δ²a + … = a/(1 − δ)

and

    P_ALL D/TFT = b + δc + δ²c + … = c/(1 − δ) + b − c,

respectively. Cooperation then pays if P_TFT/TFT > P_ALL D/TFT, i.e., if

    δ > (b − a)/(b − c),   (1)
as popularized, for instance, by Axelrod (1984/2006). According to Inequality (1), cooperation (as an institution) may become logically possible in a single-shot decision, as an equilibrium of a PD supergame different from the “one-shot” Nash equilibrium. It is the perspective of an individual agent who decides, before the very beginning of interactions, which of the two possible strategies to choose. This, however, does not by itself tell much about the individual motivations to act and about the process of emergence of a cooperative equilibrium. Therefore, we will consider some prerequisites and implications of this logical condition of institutional emergence, which are often presumed only tacitly.
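The supergame calculus behind condition (1) can be checked numerically. The following sketch (ours, not from the chapter) encodes the two capitalized payoff streams and the resulting threshold for the discount factor; the payoff values in the example are illustrative assumptions:

```python
# Sketch (ours): capitalized supergame payoffs for TFT-vs-TFT and
# ALL D-vs-TFT, and the cooperation condition (1),
#   delta > (b - a) / (b - c).

def payoff_tft_vs_tft(a, delta):
    # a + delta*a + delta**2*a + ... = a / (1 - delta)
    return a / (1 - delta)

def payoff_alld_vs_tft(b, c, delta):
    # b + delta*c + delta**2*c + ... = c / (1 - delta) + b - c
    return c / (1 - delta) + b - c

def cooperation_pays(a, b, c, delta):
    return payoff_tft_vs_tft(a, delta) > payoff_alld_vs_tft(b, c, delta)

# Illustrative PD with b > a > c > d, e.g. b = 5, a = 3, c = 1,
# so the threshold (b - a)/(b - c) = 2/4 = 0.5:
assert cooperation_pays(3, 5, 1, delta=0.6)      # above the threshold
assert not cooperation_pays(3, 5, 1, delta=0.4)  # below the threshold
```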
Coordination on “Meso”-Levels
123
The critical factors are the given quantitative dilemma-prone incentive structure, i.e. the quantitative strength or weakness of the collective-good problem (a, b and c), relative to the common discount factor, which in a supergame can also be interpreted as the “probability to meet the same interaction partner again next interaction”, i.e., the importance of the common future (δ). Cooperation will be the more superior, the smaller the “opportunity costs of common cooperation” (b − a) in relation to the “opportunity costs of common defection” (b − c), and the larger the importance of the future. At first, note that δ strictly applies within a supergame only. The supergame has an indefinite length, i.e. an indefinite number of interactions, and its end comes as a surprise to the agents, so that the infinite calculus above applies. However, this implies that within the supergame the two interaction partners will not change. Thus the classical discount factor applies, where p₁ is the probability that the next interaction takes place with the same agents and r is the usual time preference (“interest rate”). Let the discount factor applying within a supergame be δ₁:

    δ₁ = p₁/(1 + r).   (2)
In order to introduce partner change, the probability to meet the same agent again next round and, with this, population size, we assume a “structured” supergame where, after an indefinite number of interactions, a round will (randomly) end and a new round will begin for each individual with random partner change (weak total connectivity). That is, we suppose a uniform periodization over time both within and between rounds. With uniform periodization, the first interaction of the next round takes place in the very next time unit, so that a TFT player who has a memory length of one time unit (one interaction) will remember if he meets the same interaction partner again in the new round. This is where group size comes in. The agent will meet the same interaction partner again next round in a population of size n with probability

    p₂ = 1/(n − 1).   (3)
Obviously, the probability to meet the same agent again next round is rather small and decreases with increasing population size.²
² A formal note on memory and its implications may be in order here. The cooperative (TFT) agent is supposed to have a memory length of one time unit (t = 1), which may be considered equivalent to a required subjective “one-time-unit probability to meet again” perceived by agents. However, if cooperative agents were assumed to remember longer time spans, e.g., a memory of t = T, the maximum population size allowing for the institutionalization of cooperation would c.p. increase, since the probability of meeting a certain agent again in any of the future rounds within the memory period T would considerably increase, given the population size. Put differently, population size may increase with increasing memory periods while keeping the “probability to meet again”, or “cumulated probability”, constant. From n = 1/p₂,t=1 + 1 [see (3)], for instance, would follow the maximum population size n = 1/[1 − (1 − p₂,t=T)^(1/T)] + 1. Obviously,
A discount factor for a series of rounds, including the probability “to meet the same agent again next round” after random partner change, can then be defined as δ₂:³

    δ₂ = [p₁ + (1 − p₁)p₂]/(1 + r) = p₁/(1 + r) + (1 − p₁)/[(1 + r)(n − 1)].   (4)
Again, p₁ is the probability to meet the same agent again within a round, i.e. that interactions within the round will go on (with the same partner); (1 − p₁) thus is the probability that this will not be the case, i.e. that a round will (randomly) end; and (1 − p₁)p₂ is the probability that, in case a round has ended, the agent will meet the same agent again next round. Applying the discount factor for a structured supergame (which mirrors the condition “to meet the same again next interaction and next round”)⁴ to both the cooperator’s and the defector’s payoffs, we obtain the following formulation of the single-shot condition:

    a/(1 − δ₂) > c/(1 − δ₂) + b − c,

or

    a/{1 − p₁/(1 + r) − (1 − p₁)/[(1 + r)(n − 1)]} > c/{1 − p₁/(1 + r) − (1 − p₁)/[(1 + r)(n − 1)]} + b − c.   (5a)
Focusing on population size, we can determine the critical population size at which agents are indifferent between cooperation and defection, thus marking the border (maximum or minimum values) of the variable settings (especially of the group size n) leading to cooperation:

    n_crit = (1 − p₁)/{(1 + r)[1 − p₁/(1 + r) − (a − c)/(b − c)]} + 1.   (6)
n increases with T. For example, for b = 4, a = 3, c = 2, p₂,t=1 = 0.5, according to the single-shot Inequality (1) the related maximum population size for t = 1 would be 3. An increase from t = 1 to t = 2 then increases n from 3 to about n = 4.4, for t = 3 to about n = 5.9, etc. It is obvious that memory length is a most critical factor for structural emergence. However, in this paper, we will deal with memory not in the frame of the single-shot logic but within the population perspective below, when “connectivity” among agents will be loosened. Memory will then work in the same direction.
³ We are aware that the two different logics of intra- and inter-round (or inter-supergame) calculations should be explicitly modelled, not only under the restriction of “meeting the same again” but with its various potential outcomes. The pay-offs of a supergame of supergames would be the various potential capitalized pay-offs of individual supergames.
⁴ The condition to meet the same agent again reflects the perspective of the cooperator. The defector seems to be indifferent as to whom he will meet (he is assumed to always defect). However, he is not really indifferent, since he would not wish to meet the same agent again but rather a new cooperator every round whom he can exploit initially. Clearly, the defector is interested in a small p₂, or a large population size n, while the cooperator is interested in the contrary.
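The memory-length figures in Footnote 2 can be reproduced with a small sketch (ours, assuming the footnote’s formula for the maximum population size):

```python
# Sketch (ours) of the relation in Footnote 2 between memory length T and
# the maximum population size, n = 1/(1 - (1 - p)**(1/T)) + 1, starting
# from p_{2,t=1} = 0.5 (which gives n = 1/p + 1 = 3 for T = 1).

def max_population(p, T):
    return 1.0 / (1.0 - (1.0 - p) ** (1.0 / T)) + 1.0

p = 0.5
print(max_population(p, 1))  # 3.0
print(max_population(p, 2))  # about 4.4
print(max_population(p, 3))  # about 5.85 (the footnote rounds to 5.9)
```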
As will be shown below, n_crit is a maximum critical mass for the relevant cases, which may not be exceeded for cooperation to emerge. This condition holds the more (and population size may increase the more), the larger the payoff advantage of the cooperator over the defector (a − c) relative to the initial advantage of the defector over his later payoff (b − c).
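As an illustrative numerical check (our sketch, not part of the chapter), the critical population size in (6) can be evaluated for a few payoff ratios; the chosen parameter values are assumptions for illustration:

```python
# Sketch (ours) of the critical population size in (6),
#   n_crit = (1 - p1) / ((1 + r) * (1 - p1/(1 + r) - q)) + 1,
# where q = (a - c)/(b - c).

def n_crit(p1, r, q):
    return (1 - p1) / ((1 + r) * (1 - p1 / (1 + r) - q)) + 1

# With p1 = 0.1 and r = 0.05 (the Fig. 1 parameters below), the vertical
# asymptote lies at q = 1 - p1/(1 + r), roughly 0.9048: a weak dilemma
# (q near the asymptote) allows a large population, a strong one only a
# tiny population.
print(n_crit(0.1, 0.05, 0.5))  # about 3.1
print(n_crit(0.1, 0.05, 0.9))  # about 181
```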
The Co-evolutionary Adaptation of Population Size, Expectation, and Payoff Structure for Institutional Emergence

Population size may now be considered a condition of the superiority of cooperation, determining probabilities and expectations “to meet again”; but the relevant cooperating population may also be constituted, in turn, and its size explained and determined, under certain expectations, given the incentive structure and the striving of the agents to solve the problem (and to improve their long-run payoffs). Considering p₁, n and the incentive structure as parameters, three out of four complexes of conditions will have to be determined in a co-evolutionary process: (1) population size, (2) consistent with given or emerging expectations, (3) successful institutional emergence, and (4) the quantitative incentive structure, which may be parametrically changed, however, without dissolving the dilemma structure as such. For instance, if the problem structure (a, b, c) is given, the maximum population size n_max and the minimum expectations δ₂,min [or p₁,min, according to (6)] that still allow for successful institutional emergence will have to be determined in the very process of institutional emergence. Similarly, if the population size n and the expectations are given, the dilemma structure (a, b, c) that still is consistent with successful institutional emergence would have to be determined in the process of institutional emergence. For example, if b and c are given, a_min is to be determined; if a and c are given, b_max is to be determined; or a, b, and c are simultaneously determined, etc. For instance, agents may determine the population size (e.g., through active partner selection as explained below) and somehow experience this population size, while recurrently solving the dilemma, at a given incentive structure.
They may adapt the maximum number of partners with whom they are capable of interacting (given the incentive structure) in order to solve the problem, according to their subjective minimum requirements of “expecting to meet the same partner again”, in their efforts to create a solution superior to common defection. In the same vein, we may even consider the incentive structure being adapted according to the other conditions given, i.e. population size, expectations and the effort of problem-solving. For instance, incentives to cooperate (payments, rewards, awards, reputations, etc.) may be somehow increased or decreased by the agents in a myriad of interactions. Agents in this way may co-determine payoffs in the very process of their interactions, through contributions of various kinds. In this way, even the incentive structure may be considered endogenous.
The Cooperative Area in a Numerical Run and the Limits of the “Single-Shot” Logic

The condition for the emergence of cooperation in the fully deterministic single-shot logic (agents play repeatedly but choose their strategy only once, at the beginning) is that a neutrally stable cooperative strategy (TFT) is available. “Neutrally stable” is a strategy that performs best against itself; it is not possible to displace an established “neutrally stable” strategy. The advantage of TFT over ALL D (both against TFT) is (a − c)/(1 − δ₂), the disadvantage of TFT is (b − c). The condition for “neutral stability” of TFT as given in (5a) can therefore also be written as

    b − c ≤ (a − c)/(1 − δ₂).   (5b)
The crucial variables here are the relation of the payoffs (a − c)/(b − c), with 0 ≤ (a − c)/(b − c) ≤ 1, and the population size n, with the relevant sizes 2 ≤ n. For each (a − c)/(b − c) there is a critical minimum or maximum population size n that follows from (6). Whether n_crit is a maximum or minimum n depends on the signs of three expressions in (6). If we lift for a moment the restrictions on n, δ₂, and (a − c)/(b − c), we get a continuous two-dimensional space of n and (a − c)/(b − c) values (for given r and p₁) that displays areas of defection and areas of cooperation (see Fig. 1 for r = 0.05 and p₁ = 0.1).⁵ A subspace, marked in Fig. 1, contains the permitted values for n and (a − c)/(b − c), while δ₂ increases with decreasing n, even above 1, which in turn marks a non-permitted area. In particular, the values between n = 1 and n = (1 − p₁)/(1 + r − p₁) + 1 (which is the n-axis intercept of n_crit) involve δ₂ > 1 and therefore are not part of the permitted subspace. The graph of n_crit[(a − c)/(b − c)] has two asymptotes, a horizontal one at n = 1 and a vertical one at (a − c)/(b − c) = 1 − p₁/(1 + r). The maximum critical mass starts right above n_crit = (1 − p₁)/(1 + r − p₁) + 1 for very small (a − c)/(b − c) (for (a − c)/(b − c) = 0, there is no cooperating population) and increases quickly towards infinity for rising (a − c)/(b − c) (given p₁ = 0.1, as said). Also, to be sure, to the right of the vertical asymptote the area of cooperation continues for permitted values until (a − c)/(b − c) → 1. The size of the area of cooperation obviously depends on the position of the vertical asymptote. The more this asymptote shifts to the left, the larger the area of
⁵ Note that p₁ is to be interpreted as the probability that a round in the structured supergame as explained above will go on for at least one more interaction, and p₁ˣ therefore is the probability that it will continue for at least x more interactions. The expected value of the number of interactions per round therefore is x = log_p₁ 0.5. For p₁ = 0.1, the expected value of future interactions is about 0.3. This is obviously a small number and a rather adverse condition for cooperation. Axelrod (1984/2006), for example, set some 200 future interactions, this implying a p₁ of about 0.9965.
[Figure: plot of n against (a − c)/(b − c), with regions labelled “Relevant Area”, “Relevant Cooperative Area”, and “Irrelevant Cooperative Area”.]
Fig. 1 Area of cooperation and area with relevant values (i.e., n > 2) in a single-shot supergame model with p₁ = 0.1 and r = 0.05. Note: The graph of the n_crit function is drawn black, the asymptotes are fat grey lines, and the right-hand border of the area of relevant values [(a − c)/(b − c) = 1] is a thin grey line
dominant TFT will get. The position of the asymptote is (a − c)/(b − c) = 1 − p₁/(1 + r) and is governed by p₁ (which is the parameter for the expected remaining number of iterations in the current round, as mentioned in Footnote 5). In Fig. 1, the area of cooperation is relatively small, because p₁ is set to p₁ = 0.1, a condition rather hostile to the emergence of cooperation. Nevertheless, we can see that for an arbitrarily set “meso”-sized population of, say, n = 5,000, cooperation is feasible. Equation (6) yields the condition that (a − c)/(b − c) ≈ 0.9045. Thus, a rather weak dilemma in the incentive structure is required, given that the rounds of continuous interaction (the degree of continuity in the social relations) are rather short, slightly above one interaction (again: p₁ = 0.1). The value 0.9045 would be given, e.g., with a = 1,810, b = 2,001, c = 1 (d < 1). Obviously, as soon as we
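The n = 5,000 example can be reproduced numerically. This sketch (ours, not from the chapter) inverts (6) for the required payoff ratio; parameter values follow the text:

```python
# Sketch (ours): inverting (6) for the payoff ratio (a - c)/(b - c) needed
# so that an arbitrary "meso"-sized population (n = 5000) still supports
# cooperation, with the Fig. 1 parameters p1 = 0.1 and r = 0.05.

def required_ratio(n, p1, r):
    # from (6): (a - c)/(b - c) = 1 - p1/(1 + r) - (1 - p1)/((1 + r)(n - 1))
    return 1 - p1 / (1 + r) - (1 - p1) / ((1 + r) * (n - 1))

q = required_ratio(5000, 0.1, 0.05)
print(q)  # about 0.90459, i.e. the text's 0.9045

# The example payoffs a = 1810, b = 2001, c = 1 hit this ratio:
a, b, c = 1810, 2001, 1
print((a - c) / (b - c))  # 0.9045
```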
increase the length of rounds (increase continuity in the social relations), agents can c.p. deal with stronger dilemmas or larger population sizes to generate cooperation. Overall, the result is rather optimistic. Large areas of the permitted and relevant value combinations are areas with “neutral stability” of TFT, i.e. population sizes that allow for the emergence of cooperation under given conditions. However, these positive findings are subject to two limitations:
1. As said, neutral stability means that TFT cannot be driven out of a population or, in our case, is statically the best ex-ante response to a population that is expected to consist of cooperators. This population is therefore not exploitable by defectors. Nevertheless, this does not explain how a cooperative population, or, in the single-shot case, the expectation of a cooperative population, may come into existence.
2. The critical population size still is derived from a simple formulation of the “probability to meet again”, a fully deterministic perspective. Therefore we need to modify the model to include a stochastic population perspective, implying the possibility to engage in agency, and to investigate the possibilities of the initial development of a cooperative “culture”.
The Population Perspective, More Complex Mechanisms, and Minimum and Maximum Critical Sizes

A Process Story of Institutional Emergence and Size

The logic of expecting “to meet the same agent again” obviously is a theoretical straitjacket that restricts the process of institutional emergence to a narrow determinism, based on an ex-ante and once-for-all decision (“single-shot”) on strategy choice according to the probability to meet again, in order to sanction and sanctify earlier behavior of others. The deterministic ex-ante decision applies simultaneously to all agents in a population; the relevant cooperating group is always identical with the relevant population, i.e. there is no group within a population, no population “structured” by types of agents (strategies). Finally, it excludes agency. Thus, both some stochastic process, i.e. a population perspective, and agency need to be considered. This determinism has to be overcome by “loosening” the total connectivity among agents. In a first step, we will lay down the theoretical frame in a short qualitative “process story”. Generally, it will be assumed that probabilities “to meet”, and thus population structure and size, can be experienced by the agents, as has been assumed so far, so that agents form subjective expectations, i.e. probabilities for the future, mirrored in the discount factor. First, considering the solution above as a sequence or process, the institutional solution cannot come about through narrowly rational agents, i.e. short-run
maximizers. It describes no process or mechanism to achieve this result with short-run maximizers. These would, in a process, only be capable of generating a series of one-shot Nash solutions. Thus, as mentioned, an institution can only emerge through habituation. The institution will have to be a “semi-conscious” phenomenon, and may remain so as long as expectations of conformity with it are met, supported, e.g., by a favorable numerical result of inequality (5b), and as long as the incentive structure and the importance of expectations of a common future (to meet again) remain unchanged. Therefore, institutional emergence has to follow a broader and long-run rationality. [In contrast, the institution may be abandoned through a more or less deliberate consideration when a new (“rational”) single-shot calculation (after some condition has changed) no longer justifies rule conformity, when “surprise”, “disappointment”, “frustration” or “getting exploited” has occurred.] This broader rationality has also been elaborated under the perspective of a “horizontal” effect: individual (cognitive, planning) horizons need to be extended, in a world of complementarity, cumulativity, etc., if effective agency is to be gained (cf., e.g., Jennings 2005). Second, the institution, particularly an initial “minimum critical mass” of cooperators, may emerge on the basis of the individual motivations (1) to escape repeated frustration from common defection (from aspiring to b and receiving only c), and (2) to learn and to increase knowledge, and particularly to explore what a different behavior, namely common cooperation, may bring about (“idle curiosity” or the “instinct of workmanship”, as T. Veblen would have coined it), i.e. to find a way to improve one’s economic situation (to gain common a’s rather than c’s). The payoffs for common cooperation may not even be known (incomplete information) and may then get explored by searching agents.
The institution thus may emerge just out of an agent’s vision that there is more to be gained than repeated frustration. Agents who then make contributions to cooperation need to be imaginative, “explorative”, and creative. Therefore, agency needs to be carefully defined in an evolutionary process story. Third, the individual who starts to search and experiment with different behavior will have to contribute repeatedly to the change of expectations in favor of cooperation. The process, thus, is cumulative in the sense that all agents must repeatedly and interactively (sequentially) contribute (or will have to cumulatively punish each other). Fourth, these agents also have to be risk-taking and must not be too envious. The first to send a signal for a potentially better common future will have to take the risk of being exploited, at least once. He also may never be able to compensate for this, as compared to the other, even if common cooperation starts in immediate response to his action. This agent thus needs to be mainly focused on his own net gain, which he has to compare only with his payoff under continued common defection. Compared to this, he clearly will be better off over time. Fifth, with agents starting to learn, search, experiment, and diversify behavior (in our two-strategies world, this means starting cooperation), we may introduce a population perspective. Agents then can no longer exactly tell the strategy of another agent whom they will meet next. Behavior thus may be considered random,
and agents will have to experience the “true” population shares. The “pure” expectation “to meet” will be replaced by the expected “probability to meet a cooperative agent next round” that we call “contingent trust”.⁶ Sixth, the evolutionarily stable initial “minimum critical mass” of cooperators becomes crucial. With a “minimum critical mass”, institutionalized cooperation may expand into a population initially consisting only of defectors. Seventh, while agents will have to experience “contingent trust”, they no longer remain focused on just (the “probability to meet”) the same agent. They will have to know about as many agents as possible. Thus, more elaborated capabilities of agents have to be considered. Instances of such agency will be memory, monitoring, building and transmitting reputation, and some active partner selection based on the knowledge generated by these mechanisms. Eighth, the size of the relevant group then may be determined by the individuals through some active partner selection. Agents may in this way affect both the size and the agent-type composition of their individual “peer group” of interaction partners. We may consider the “meso” group size then to be co-determined by the agents, who actively adapt their “relevant group” to a maximum size and a composition which still allows them to contribute to institutional emergence, according to a given incentive structure and their contingent trust. Specifically, they may even enlarge their relevant “peer group” when they can increase their contingent trust sufficiently through some partner selection. Real life has properties that may serve as criteria of partner selection, i.e. proximity, or neighborhood, whether spatially, socially, or professionally defined.
Agents then may confine themselves to some group of interaction partners through mobility, choice of localization, “social exclusion”, or just actively picking out cooperators, etc., in an effort to keep the expectation of cooperation high while solving the dilemma problem. It is a rationality of “smallness” of the peer group of an individual (sometimes with the danger, in turn, of too great a “cliquishness”, petrifaction or sclerosis of institutions attained, and subsequent institutional lock-in on an inferior path). Technically speaking, we then will “loosen” the total connectivity of agents in favor of some “weaker” connectivity, in the sense that the scale of interaction possibilities among agents becomes smaller (cf., e.g., Batten 2001, 89ff.; Watts 1999, 204ff.). Ninth, the system then adopts an endogenous dynamic with different equilibria (attractors). Not only relatively small “minimum critical masses” will be illustrated, but also some specific dynamics of cooperators’ and defectors’ payoffs. This includes some tendency to destabilize the rule once it is established and has diffused among agents beyond the equilibrium size of the carrier group. The probability to gain more through unilateral deviation from the institution and exploitation, i.e. the incentive to defect, then may increase for individuals in a growing population of cooperators. These defectors, though, would not want to “kill the goose that is laying golden eggs” (Axelrod 1984/2006, 221). This may lead to an equilibrium
⁶ Note, however, that as soon as the minimum critical mass has been established this way, our system below will still behave deterministically when moving to its new equilibrium.
“maximum critical mass”, being smaller than the whole population involved, with a certain number of rule breakers that the institution and its carrier group can afford without breaking down. An “embedding process story”, of course, does not sufficiently explain the logic of the process of emergence of “meso” size. Some further implications of the single-shot logic combined with its evolutionary embedding shall be explored in two steps, first the population approach, second agency.
The Population Perspective and the “Minimum Critical Mass”: A Graphical Display

Behaviors may diversify, i.e. cooperation may now occur side by side with defection, motivated (in frustration-experiencing, searching agents) as explained above. The number k of cooperative agents in a population n now becomes a critical variable, a stochastic variable causing a stochastic process at the outset, i.e. until a “minimum critical mass” is established, from where on the new equilibrium will be reached deterministically, as will be seen. A single-shot calculation still applies. (Note that k has been zero or n in the fully deterministic logic as discussed so far.) In the following, the average payoffs per round for cooperation and defection will be mapped over k. Some k then will indicate the “minimum critical mass” of initial cooperators required for institutional emergence, i.e. the “minimum size of any coalition that can gain by abstaining from the preferred choice” (Schelling 1978, 218). Schelling’s illustration (1978, 217ff.; also applied, e.g., by Elster 1989, 27–44) of the case ALL C vs. ALL D showed the usual outcome of a PD one-shot Nash equilibrium: no individual incentive to cooperate and thus no mechanism for a viable cooperating group to emerge, at any k. This is equivalent to a very large population since, in both the cooperator’s and the defector’s perspectives, there is a zero “expectation to meet the same agent again” in the very large, fully anonymous population and, thus, a zero expectation to be able to sanction or to be sanctioned. Therefore, a zero commitment results. It follows that δ₁, δ₂ → 0, since p₁, p₂ → 0 and, therefore, n → ∞. This will serve as a benchmark in Fig. 3 below. Obviously, for k = 0, the function of average payoffs for defectors will be f_ALL D(k) = c, and for k = n, the cooperators’ function will be g_ALL C(k) = a.
If all but one cooperate (k = n − 1), the last defector left in this large population will always (and thus on average) gain f(k) = b, while at k = 1 the remaining cooperator will always (and on average) gain g(k) = d. Note that f is not defined for k = n and g is not defined for k = 0. In all, the payoff of a defector always exceeds that of an ALL C cooperator for any given k; hence there is no “minimum critical mass” beyond which cooperation will pay for an individual agent. This has also been Schelling’s (1978) benchmark configuration.
In contrast, TFT is considered the prototype of a more realistic and less “pathological” (less “masochistic”) type of cooperation than ALL C. It sanctions and thus makes future expectation relevant, as reflected in the single-shot inequality. Using the general δ₂ of (4), the payoff functions will yield average payoffs according to

    f_ALL D(k/n) = (k/n)[c/(1 − δ₂) + b − c] + [(n − k)/n][c/(1 − δ₂)]   (7a)

and⁷

    g_TFT(k/n) = (k/n)[a/(1 − δ₂)] + [(n − k)/n][c/(1 − δ₂) + d − c].   (7b)
This will explain the conditions of evolutionary stability of TFT in an ALL D population. Note that this includes the earlier fully deterministic approach (the single-shot of a “structured supergame”), through δ₂, in a partially stochastic approach. According to Axelrod’s (1984/2006, Chap. 3) illustration, with δ = 0.9, b = 5, a = 3, c = 1, d = 0, this yields f_ALL D = 14(k/n) + 10(n − k)/n and g_TFT = 30(k/n) + 9(n − k)/n, i.e., for g_TFT ≥ f_ALL D, a minimum of k/n ≥ 1/17, or about 5.9% TFT-cooperators, would be sufficient to allow for the institution of cooperation to emerge. Generally, this simple system has two stable equilibria and one unstable equilibrium: k/n will assume either zero or one or the “minimum critical mass” k*_min (given n) that supposedly has been attained through “stochastic” behavioral diversification (search, experimentation, etc.):
    k_{t+1}/n = (i) 0, if f(k_t) > g(k_t); (ii) 1, if f(k_t) < g(k_t); (iii) k*_{min,t}, if f(k_t) = g(k_t).
We will shortly discuss the simple logic of the population perspective with the expectation “to meet a certain agent next interaction”. For cooperation to become “evolutionarily stable” [case (ii)], the condition (7b) ≥ (7a) yields the minimum critical mass k*_min, or the critical portion (k/n)*_crit:

    (k/n)*_crit = (c − d)/[(a − c)/(1 − δ₂) − b − d + 2c].   (8)
This, then, can easily be depicted in Schelling’s graphics, as illustrated in Fig. 2 below (see also, e.g., Elster 1989, 37–39, who, however, did not discuss a clear-cut analytical result in terms of population or group sizes). The cases are obvious⁸:
⁷ As said, this still represents a static and deterministic ex-ante single-shot calculation.
⁸ Note that the agent who decides to cooperate or defect based on these equations will not be part of the relevant k’s and n’s, which is relevant for small k’s and n’s.
Fig. 2 Illustration of the payoff functions for TFT vs. ALL D, depending on the population size (indicated by δ₂), yielding, in a “small group”, a “minimum critical mass” for institutional emergence, k*_SG, and the “relevant cooperating group” at k*_max = n
(1a) In the very large population where δ₂ → 0, when nobody cooperates (k = 0), the defectors continuously receive c on average per interaction. However, when all others cooperate (k = n − 1), the last remaining defector (the nth member of the population) will receive b, i.e. [(n − 1)/(n − 1)][c/(1 − δ₂) + b − c], all the time, since he will always meet another cooperative interaction partner whom he can exploit in each of their first encounters (which is the only interaction these particular two will have in that very large population).
(1b) Accordingly, the first to cooperate in the very large population when all others defect (k = 1) will continuously receive d, i.e. 0·a + [(n − 1)/(n − 1)]·d, on average per round, since he will always be exploited in the first interaction with any defector, and no further interactions with the same agent will follow in a round.⁹ In contrast, if everyone cooperates (k = n), each will continuously receive a in every period. It follows that f(δ₂ → 0, k ∈ [0, n − 1]) will always lie above g(δ₂ → 0, k ∈ [1, n]) in their common domain k ∈ [1, n − 1].
(2a) The situation changes as the population gets smaller. In a very small population, when δ₂ → 1, defectors will receive c on average per round when k = 0, as was the case in the very large population. However, if all but one agent cooperate (k = n − 1), this last defector will receive only slightly above c on average, since he will be able to exploit a TFT cooperator only once, then getting b, i.e. [(n − 1)/(n − 1)][c/(1 − δ₂) + b − c] + 0·[c/(1 − δ₂)], and from then on will get c with him (over all rounds in the supergame. Note that, in the
Possibly a case of pathological behavior similar to an ALL C player.
smallest group possible (two), he will encounter the same TFT agent again and again over all rounds, and the TFT will remember him from one interaction to the next and thus also between rounds.) (2b) If only one agent cooperates in the very small population (k = 1), the TFT cooperator will receive zero from common cooperation, i.e. 0·a, but somewhat below c on average, since he will be exploited once and then will continue with virtually the same interaction partner, with c on average per interaction for both. And again, if all cooperate (k = n), they will receive a, as in the case of the large population. As can easily be seen, the large population again yields the one-shot Nash situation, as in the ALL C case. In contrast, the very small group (SG) yields some minimum critical mass, k*SG, beyond which cooperation pays for all individual agents (see Fig. 2; note that all graphs run from k = 0 to k = n for the sake of simplicity; we will be more exact in Fig. 6 below). The emergence, or creation, of the minimum critical mass may be a “motivational” problem as described, an “organizational” problem (Schelling) (if we refer to hierarchies or authorities), or might require some policy design (on this, see below). The “relevant cooperating group” will, as said, be the whole population in this case, i.e. all defectors will switch to cooperation in a logical second. There is a maximum population size (at some δ2) at which k*SG comes into existence, i.e. f(δ2, k*SG = n) = g(δ2, k*SG = n). At this maximum, k*SG = n: the “minimum critical mass” and the “relevant cooperating group” equal the whole population size. With decreasing population size, i.e. with a further increasing δ2, k*SG will move from the right end of the n-axis to the left, stopping at some k*SG, as illustrated.
With b = 4, a = 3, c = 2, d = 1 (or b = 5, a = 3, c = 1, d = 0), the calculation of the δ2 that allows k*SG to come into being at k = n (i.e., k/n = 1) yields δ2 = 0.5 (for a numerical simulation, see also Fig. 3 below, where the graph enters the relevant area at k/n = 1 at δ2 = 0.5). In the following, we will elaborate on the population perspective and build upon the mentioned behavioral diversification, considering agency, specifically information gathering and selection capabilities. We start from “contingent trust”.
“Loosening” Connectivity: “Contingent Trust”, Monitoring, Memorizing, Reputation, and Selection: With Some Numerical Examples

Contingent Trust

In the population perspective, i.e. in a “structured” population with portions of (representatives of) different strategies, agents can no longer focus on “the one” interaction partner whom they know and need to meet again in the next interaction and
Fig. 3 Graph of the function (k/n)*crit [for Axelrod’s (1984/2006) incentive structure, i.e., a = 3, b = 5, c = 1, d = 0]; Axelrod’s solution, δ2 = 0.9, lies in the lower right. Note: The graph of the function is drawn black, the asymptotes are fat grey lines, and the borders of the area of permitted and relevant values (at k/n = 1 and δ2 = 1) are thin grey lines. [The plot, on axes k/n over δ2, marks a “Relevant Area”, a “Relevant Cooperative Area”, an “Irrelevant Cooperative Area”, and “Axelrod’s Solution”.]
next round in the single-shot logic. Rather, they have to learn about many potential partners and the distribution of strategies among them, in order to increase their contingent trust δk in their inter-round transitions. While we assume agents to face random occurrences of partners with different behaviors, what counts now is their expectation to meet a cooperative partner next round, i.e.

δk = k/n    (9)
(similarly, e.g., Elster 1989, 34). Compared to the “benchmark knowledge” of only one interaction partner, the population perspective, therefore, increases the number of agents that the individual
has to “know”. At the least, he must get to know a representative sample of the population. Let us briefly consider which agency capacities specifically are required in order to increase this expectation to a sufficient level.
Memory and Monitoring

First, knowledge of others will increase by adding some memory capability (a memory of one period only has been implied in TFT so far). Second, agents should be considered capable of monitoring concurrent interactions between identifiable third parties (see also, e.g., Elster 1989, 40f.). In the fully deterministic single-shot logic he could only “monitor” the same agent, so to speak. Assume six memory periods for own interactions and fewer memory periods for interactions among monitored third parties: the agent memorizes his own interactions over his maximum number of memory periods, while the more “distant” the third parties are from the monitoring agent, the shorter the agent memorizes their monitored actions [10]. Assume the agent can monitor ten other interactions. Thus, a monitoring-memory funnel may exist where the most “distant” monitored third-party interactions are memorized only for one period, i.e. only if they have been monitored in t−1, the next less distant for two past memory periods, back to t−2, etc., and the own interactions back to t−6. For a simple numerical example of information gathering on a “meso”-sized number of agents, see the illustration in Fig. 4. Consider agents endowed with the ability to monitor ten other concurrent interactions at a time. Agent A then will “know about” 21 agents, excluding himself, from current monitoring. From memory, he would know about 21 more from t−1, 17 from t−2, 13 from t−3, etc., in sum 87 other agents.
Fig. 4 A monitoring-memory funnel: illustration [rows t0 through t−6 show concurrent interactions X--Y, including agent A’s own interaction A--Y; the number of remembered interactions shrinks with memory depth]
[10] We will consider a concrete geographical or social topology, i.e. a neighbourhood, below in the section “Insights from Computer Simulation: An Agent-Based Model”.
However, there is no perfect mobility in that monitored and memorized part of the population. Assume that, in addition to A, all Xs stand for immobile agents with fixed positions in the neighborhood of A who appear repeatedly, while only 50% of the monitored and memorized part of the population change all the time; assume for the sake of simplicity that these are the Ys. In this way, A knows only about 58 different other agents. Also assume 50% observed cooperative different other agents (i.e. δk = 0.5) (or actions, resp.). The agent’s “knowledge” of (the behaviors of) cooperative different other agents then is 29. Compared to the “benchmark” of “the one” he knows (and only needs to know) in the fully deterministic single-shot logic, the relevant population size, which was 3 at p2 = 0.5 in the logic so far, has thus increased to 59, and the number of cooperators from a maximum of 3 to 30 (including A), i.e., by a factor of 10.
Reputation Chain

Third, reputation may further increase the “knowledge” about other agents, i.e. it further increases the probability of meeting a partner next round whose (earlier) behavior the agent knows, in this case through a third person (for a similar approach and evidence from experimental economics, see Kollock 1994). Assume, for example, that each agent has a monitoring-memory funnel that includes 29 known cooperative different other agents, as above, and that each agent can ask all of these cooperators about the behavior of third agents (outside of his own funnel but inside their funnels). Assume further, for the sake of simplicity [11], that only half of them have own funnels that differ from A’s funnel. Thus A may get new information from only 14 known cooperators about 28 new third (different other) cooperative agents each. In this way, the number of other agents “knowable” by an agent will multiply and sum up over the number X of steps of the chain according to k = Σx 14^x · 28, with x = 0, 1, . . ., X, which, for instance, would yield around 77,000 known agents in only three steps of the reputation chain (the third-step term alone is 14³ · 28 = 76,832). This would definitely have to be considered “meso”-sized if these agents were to constitute a “relevant cooperating group”. Of course, the reputation chain will not work with the same effectiveness across the different steps or links of the chain (the “neighborhood” geography), but rather may get “thinner” and “weaker” with increasing distance from the interrogating agent. Information feedback to the original agent will be increasingly incomplete
[11] A “half-distant” neighbor’s funnel, of course, will not be completely identical with A’s funnel but will overlap, so that he might add, say, 10% or 20% or more information (depending on his distance) to A’s knowledge from his own funnel. Specific numbers would require a more elaborate model of overlaps of funnels in a topology. We will skip this and simply assume that only 50% of the cooperators in A’s funnel can add information; these, however, may provide information from the complete set of their assumed 28 different other third cooperative agents.
and subject to leakage. Even if we assumed a considerable leakage of 50% per step, we would still reach 14 · 7 · 3.5 · 28 ≈ 9,600 agents, which may also be considered to reach into “meso” sizes. Also, the set of agents may “bend” into a “spherical” topology, so that more remote agents may increasingly be already “known” ones. In this way, k may easily reach some limit at any n. In all, this should illustrate that these agency mechanisms, i.e. the active acquisition of information about others, may easily lead us into a “meso”-size scale of populations of “known” agents and “known” cooperators, and with this into the individual experience of the “true” composition of the whole population. No further devices and mechanisms such as signaling and substitute indicators such as sex, race, age, living area, formal education, certifications, identity cards, corporate uniforms, appearance, ceremonial behaviors or other signals would be needed. Note, however, that this is just a first intuitive understanding of a “relevant meso-sized cooperating group”, while we develop a theoretical understanding below. Note also that we have to assume “correct” reputation here, i.e. information requested and received by A from B on a potential interaction partner C is not subject to “strategic” tampering in favor of B. Finally, if such verbal communication comes into consideration, this is something that non-cooperative game theory basically would render senseless and ignore (the “cheap talk” issue) [12]. “True” verbal communication itself can, of course, not be explained through the same process and logic of institutional emergence that is investigated here. Qualitative theorizing on institutions and multi-level systems of reference among norms and institutions, of course, provides more opportunities to go beyond such limitations (cf., e.g., Hodgson 2006), and cooperative game theory might have a major role to play here.
However, why and how should a certain maximum percentage of cooperators above the minimum critical mass (which comes into existence through another agency mechanism, as explained) come into existence in the population? “Loosening” the connection between expectation, namely contingent trust, and population size through active partner selection will help us to understand group size adaptation.
Adapting Group Size Through Partner Selection and a “Maximum Critical Mass” Smaller than the Whole Population: A Graphical Display

Monitoring, memorizing and using a reputation chain can be considered informational preconditions for selecting agents. Assume that interaction partners appear
[12] This might be a point for a complementary use of cooperative game theory. We owe the advice on cooperative game theory to an anonymous referee and will denote the logical points of its complementary use in two more places in our argument.
in some random sequence subject to “distance”, i.e. in a neighborhood topology the probability of appearance of potential interaction partners to the individual decreases with decreasing proximity. But knowledge about agents also decreases with decreasing proximity, as discussed for the reputation chain. Assume further that, in the process of search, experimentation and behavioral diversification motivated as described above, cooperators start to select agents (on such agency capacities, see, e.g., Stanley et al. 1994; Kirman 1998; Davis 2007, 2008; Dolfsma and Verburg 2008; Leydesdorff 2007). It is supposed that the cooperative agent has some capacity to reject interaction with a known defector, say, a capability to reject on average every second defector he encounters. Cooperators also may prefer to interact recurrently with agents they cooperated with in the past rather than with known defectors or new unknown agents. The size of the “relevant cooperating group” thus may be determined by myriads of acceptances and denials of interactions. Lab experiments have shown that agents indeed try to reduce net complexity through active selection and active building of neighborhoods (see, e.g., Harmsen-van Hout et al. 2008), and that network effects can informally be attained by groups constituted through the selective interactions of individuals (e.g., Tucker 2008). Spiekermann (2009), for instance, has recently shown that a selection mechanism (i.e. choosing or excluding/rejecting partners) may already allow for the emergence of cooperation in an n-person public-good game. However, again, size was not an explicit issue in his analysis. In this way, the cooperator will be able to increase the ratio of cooperators in his individual interactions. Individual i thus will be able to increase ki/ni for the group of his past and present interaction partners and in this way increase δki = ki/ni, while the surrounding population remains the same size n.
He thereby decouples δki from the general δk existing in the population, so that δki > δk (note that he is not assumed to be able to reject every defector). Therefore, he will also increase his average outcome in a population of given size and structure, and cooperators together can make cooperation increasingly more attractive. In particular, through selection cooperators can change their payoff curves and realize higher growth rates than in the linear case of the simple population curve above, which depends on δk rather than δki (7b). Since cooperators’ payoff cannot exceed a, we have to assume a degressively increasing payoff function. With a perfect selection capacity, the curve would start at d at k = 1 and jump to a with k ≥ 2, i.e. cooperators would always interact only with cooperators. Since selection, however, is not perfect, the curve will increase more slowly, as will their δki, in spite of the fact that cooperators may become more numerous in the whole population through both initial experimentation and the increasing attractiveness of cooperation (through that very selection). Specifically, we may assume that the effectiveness of partner selection declines because the knowledge about agents does not grow at the same rate as new (potential) interaction partners occur. Memory and monitoring provide knowledge about a limited number of agents, and the reputation chain mechanism fades
away with distance, as said. So the more new partners occur, and the greater the distance they come from, the less the agent will already know about them. Overlapping individual partner selections may then constitute the “relevant cooperating group” [see also, e.g., Oestreicher-Singer and Sundararajan 2008; for minimum relations and networks that still allow for institutional diffusion (“small worlds”) and segregation of a population, see, e.g., Foley 1998, 18ff., 38ff., 61ff.; Watts 1999, 204ff.; Batten 2001, 89ff.]. In all, the function thus may increase according to some exponent α, 0 < α < 1, applied to ki/ni, i.e. (ki/ni)^α. Defectors, on the other hand, will be sorted out to some degree by the early cooperators and thus will not fully profit from an increasing number of cooperators as in (7a). Their population payoff curve thus will remain below the curve given by (7a), and cooperators’ payoff curves may rise above the defectors’ payoff curves already for a small number of cooperators. However, while cooperators cannot get above a, defectors will still increasingly profit from a growing number of cooperators. Their payoff function may be considered progressively increasing, subject to (k/n)^(1/α). This may constitute a maximum critical mass, or relevant cooperating group, below the size of the whole population.

A Maximum Critical Mass

In order to theorize and illustrate the simple logic of such segregation, different non-linear cooperation payoff curves have often been considered in the literature. For instance, Schelling (1978, 104f., 239ff.) has referred to net externalities to explain progressively increasing cooperative payoff functions. This implies some additional positive mutual externalities among cooperators, perhaps even above what cooperators generate on average in the standard PD (i.e., a).
Schumpeterian innovation economists have argued in favor of the cumulative character of new knowledge, both inter-personally and inter-temporally, so that cooperative payoffs may increase through an extra synergetic effect (through mutual learning, imitation, etc.) (see, e.g., Pyka 1999, 98ff.). Specifically, S-shaped curves have been considered in such contexts. For instance, in his technology choice model with increasing returns (i.e. net effects), Arthur (1989, 123ff.) has made use of logistic curves. While a technology adoption function maps the probability of choice of a certain technology by the next choosing agent against the number of those who have chosen this technology so far, i.e. the equivalent of a payoff function depending on the number k who have “chosen cooperation”, some “improvement function” would mirror additional increasing returns to adoption. In particular, he considered cumulative learning (by using) effects and “coordination externalities” (p. 126). He also considered a “bounded improvement function” for when effects become “exhausted”. The population would then split up, in an equilibrium, with the coexistence of more than one technology, one portion using the “dominant technology”, another one some “minority technology”. Also, Cooper and John (1988) elaborated on economies with “strategic complementarities”, or synergies. Going beyond “positive externalities” of agent
A which just increase the payoff of agent B, “synergies” imply that an “increase” in agent A’s strategy increases the marginal return of agent B’s own action. A’s strategy thus is an increasing function of agent B’s strategy, and vice versa. A “synergetic” reaction function displays some “multiplier effect” and is S-shaped (pp. 445ff.). It can be matched against a benchmark of a linear curve representing a series of Nash equilibria. This involves multiple equilibria at the intersections of both curves. The higher intersection basically is equivalent to a cooperative equilibrium. The economic examples that Cooper and John discuss include net externalities through coordination in supplier networks and demand coordination among multiple industries in the business cycle. The sigmoid function reflects the idea of initially exponentially growing payoffs for cooperative behavior (cumulative learning, “synergies”, net externalities, and the like), which later ends in maturity, when the “resource of cooperation” somehow becomes “exhausted” (see also Elster 1989, 28f., 32–34, for a logistic curve, while discussing the “technology of collective action”). While the composition and “contingent trust” for the whole population may remain relatively low, cooperators increase the composition of their own interaction partners (ki/ni) and thus may increase their average payoffs above the level of the average payoff of defectors through selection. This in turn will increase the portion of cooperators in the population and contingent trust. With degressively increasing cooperators’ payoffs and progressively increasing defectors’ payoffs, this will occur at a relatively small “minimum critical mass” compared to the linear case of payoff functions above (7a and 7b). Also, sorting out (to some degree) one’s interaction partners is equivalent to the assumption that cooperators “self-commit” to interact with “their kind”.
Or: self-commitment to interact a certain number of rounds among cooperators is equivalent to some capability to select partners. Here, Axelrod’s use of δ applies. He focused on the number of interaction rounds among agents with a somehow committed ongoing relationship. If two agents had a given (expected) round length of one interaction, the probability that the interrelation will end is 1.0; accordingly, the (expected) probability to “meet the same again” is δ = 0 (i.e. p1 and δ1 according to our definitions would be zero: one-interaction rounds; if this applied also to the inter-round probability p2, then δ2 would also become zero; this is equivalent to the very large group). With two interactions in total, the probability that the interrelation (the round) will end is 0.5, and also δ = 0.5. Axelrod had set up his tournaments in this way, with an average round length of 200 interactions, i.e. a probabilistically determined chance of about 0.003 of ending an interrelation after each interaction, corresponding to δ ≈ 0.997 (1984/2006, 63ff., 212f.). While this is no advantage for defectors, a high δ generates a relative advantage for TFTs, since a > c. In this way, TFTs can be considered to be self-committed for some 200 interactions per round. In addition, Axelrod had assumed that an invading group of TFTs is relatively small, so that it only marginally affects the ALL Ds’ payoffs. Defectors’ payoffs are considered unchanged (!), since the great bulk of the ALL Ds’ interactions would still take place among themselves. If, for instance, δ = 0.9 for both cooperators and
defectors and b = 5, a = 3, c = 1, d = 0, the defectors gain on average only 10 with each other [a simplified version of (7a)]. The TFTs, in contrast, have to take into account considerable portions of interactions with ALL Ds, according to (7b). The example yields k/n ≥ 1/17, i.e. a share of only around 5.9% of invaders can succeed and survive in a “hostile” social environment (see Fig. 3 above for the location of the “Axelrod point” in the solution space). In this way, Axelrod was able, through “committed” large numbers of interactions, both to keep expectations (δ) high and to increase cooperators’ group size (k or k/n, resp.), while the minimum critical mass would be very small (below 6% in the example). This implies abandoning total connectivity by introducing selectivity through self-commitment. We illustrate the effect of partner selection in Fig. 5. Cooperative payoffs now may quickly exceed the average defector’s payoff. The payoff functions “with selection”, gsel. for cooperators and fsel. for defectors, can be represented as explained by:

fsel.ALL D(k/n) = (k/n)^(1/α) [c/(1 − δ2) + b − c] + [(n − k)/n] [c/(1 − δ2)]    (10a)

and

gsel.TFT(k/n) = (k/n)^α [a/(1 − δ2)] + [(n − k)/n] [c/(1 − δ2) + d − c],  0 < α < 1.    (10b)
Fig. 5 Illustration of the effect of partner selection (and “Exhaustion” thereof) on the payoffs from cooperation (gsel.) and defection (fsel.), indicating the “Meso”-size area of the “Relevant Cooperating Group”, k*max
Note that the cooperators’ curve is set on top of the “worst-case” cooperative function, the ALL C vs. ALL D case, equivalent to the very large group, where δ2 may be very small. This may illustrate that the constitution of a meso-sized relevant cooperating group may occur even in a large population, depending on the exponents α and 1/α. This would of course be the case all the more on top of a “small group”, i.e. with flatter curves (as given in Fig. 2). As said, a reduction of the “minimum critical mass” compared to the linear case may occur (Fig. 5), so that random diversification of behavior combined with early selection might bring about the “minimum critical mass”. A general solution to determine the lower and upper intersections of fsel. and gsel., from the condition (10a) = (10b), is given by:

(k/n)^(1/α) [c/(1 − δ2) + b − c] + (k/n)(d − c) − (k/n)^α [a/(1 − δ2)] = d − c.    (11)

This yields a complex function of k which has solutions with at least two values of k. As a numerical example, assume n = 1,000, α = 0.5, δ2 = 0.1, and again b = 5, a = 3, c = 1 and d = 0. This will yield k*min ≈ 82 and k*max ≈ 696. This illustrates that the equilibrium “relevant cooperating group” size is below the size of the whole population and is a “meso”-sized institutional carrier group. This also reflects the fact that the established informal institution may bear, and even provoke, some degree of defection, by making it, with increasing k, ever more profitable to deviate. Any institution, in fact, exists, and may survive, in the face of a certain number of defectors. However, these defectors no longer endanger the institution as such. Also, we need not necessarily think of individuals as being clear-cut cooperators or defectors at any given point in time, but may think of certain portions of cooperative and defective actions in the sets of actions of every single individual.
Finally, note again that we need not, and do not, assume that the average payoff from cooperation ever exceeds the original payoff value a from common cooperation at any group size.
Insights from Computer Simulation: An Agent-Based Model As we have mentioned in the previous section there are mechanisms contributing to the emergence of cooperation that lie beyond the scope of straightforward deterministic computation. Particularly, agents engage in interactive shaping and reshaping of the network structure in their direct environment, the basic effect of
which has been outlined in a graphical analysis in the previous section as well. To understand the potential numerical extent of the critical factors, a more exact analysis is required; an analysis that is best accomplished by computer simulation of an agent-based model. While it might be necessary to investigate this model for numerous different specifications of some key variables, for large numbers of agents and for many thousand iterations, we will limit our study to relatively few settings and a rather limited number of agents and iterations in the current paper. We will, however, be able to show that partner selection as well as neighborhood structure in general have a strong influence on the existence, extent, and quality of the emergence of cooperation in the system, and in particular on the average speed of the development of “meso”-groups of cooperators. Further, other effects resulting from previously analyzed mechanisms, such as memory (without which no non-idiosyncratic cooperative strategy like TFT would be possible) and the expected number of interactions or round length (the shadow of the future, p1), are also present and quantifiable. Of course, in order to proceed to matching such simulation results with empirical data, it would have to be carefully evaluated which specifications of the environment variables fit (some of which are discussed below). In this way, we might come close to systems of social interaction in real social-economic systems.
The Model Outline

The model is programmed in C++, taking advantage of the significantly better performance and greater flexibility of this standard programming language compared to high-level simulation environments. The model specifies a population with a number of agents n (most of the simulation runs presented in this section use n = 64), all starting as defectors, playing structured supergames of the prisoner’s dilemma with Axelrod’s payoff structure as mentioned (b = 5, a = 3, c = 1, d = 0). Further, all agents have a certain number of direct neighbors dn that is only temporarily changed in the process of partner selection. Note that this is an abstract graph topology, not a regular lattice (for an illustration, see Fig. 6). This is for two reasons: firstly, it avoids ex ante clustering as seen in regular lattices [13]; secondly, the lattice structure would make the partner selection mechanism much more difficult [14]. Agents interact with their direct neighbors (degree of neighborhood D = 1) as well as second neighbors (the neighbors of their
[13] For example, for an infinite two-dimensional rectangular lattice with the Moore neighborhood (horizontal, vertical and diagonal edges), the clustering coefficient as defined by Watts and Strogatz (1998) is 3/7 ≈ 0.43. In our case, on the other hand, for the graph used for this simulation, the clustering coefficient approaches 0 as the graph size tends to infinity.
[14] The agent could not just terminate one connection to another agent but would have to transfer to another location on the lattice, changing her entire neighborhood. While it would still have been possible to implement the simulation like this, it would not have had any particular advantages to justify the greatly reduced flexibility of the mechanism.
Fig. 6 The neighborhood structure of the simulation model (for a setting with dn = 4)
neighbors, degree of neighborhood D = 2), third neighbors (the neighbors of their D = 2 neighbors, degree of neighborhood D = 3), etc., with probability

w = 1/(2^D · dn^(D−1)),  D = 1, . . ., ∞

in each round, a function that yields an expected value of interactions with neighbors of degree D starting with w · dn = 0.5 · dn (for D = 1) and strictly decreasing in D [15]. This means it is more likely for an agent to engage in an interaction with an unspecified direct neighbor (D = 1) than in an interaction with an agent of any other degree of neighborhood [16]. Also, agents will try to cut off their neighborhood connections to defectors, which succeeds with a probability of at most q = 0.5, declining with the expected accuracy of the related information (that is, the age of the memory marking the neighbor in question as a defector). Agents have the opportunity to do so once every g rounds. Agents have a memory of mem periods (the following simulation runs use mem = 4). They also have a capacity to monitor interactions of their neighbors of degree D = 1 (direct neighbors) through a certain degree D = m, where m must (in order to apply the above discussed memory funnel) be m < mem.
[15] For technical reasons, the simulation program rounds small probabilities to 0 – otherwise the degrees of neighborhood to be considered (and thus the number of operations to be exercised by the simulation) would be infinite.
[16] Otherwise, depending on dn, this may lead to situations in which the probability to interact with a specific agent decreases in D but the combined probabilities for all agents of this degree of neighborhood increase, in effect leading to a greater impact of more distant parts of the population.
Finally, the dynamics of strategy choice are applied every g rounds. As every run of the simulation constitutes a sequence of repetitions of exactly the operations executed during g rounds, we may apply the technical term iteration for this time span (g rounds). The dynamics are driven by a payoff (P) threshold T:

T = T(P) = MEAN(P) − h · SD(P),

SD being the standard deviation and h a parameter (the following simulation runs use either h = 1 or h = 1.5). If the payoff of an agent falls below T, the agent is forced to change her strategy, becoming a defector if she has been a TFT player or switching to TFT if she has been defecting. Using a variable, non-constant threshold assures that the system will be neither inherently static nor oscillating [17]; the mean and standard deviation of the distribution of the payoffs are used in order to separate a minor share of the population at the lower end of the success (payoff) spectrum [18]. It is also reasonable to assume for real-world situations that actors (humans in this case) will compare their outcome to that of other actors and adjust their strategies by trial and error. At this point it is worth pointing out that the system is no longer deterministic: the probability to meet any particular neighbor in a certain round, w, as well as the length of the sequence of interactions in any particular encounter (according to the probability p1), are stochastically determined. It is immediately obvious that the distribution of payoffs is generally not concentrated at a single point, even if the whole population is composed purely of defectors (or purely of cooperators), assuring a non-trivial distribution SD(P) ≠ 0.
Thus, if the whole population starts as defectors, a certain share will have payoffs lower than T after the first round, thus switching to TFT; however, lacking a group of cooperators in their neighborhood, most of them might end up being exploited and switch back in the next round (depending on other environment variables such as p1). The setting here is entirely different from that considered in the last sections with respect to (7a), (7b), (10a), and (10b) in that it no longer considers homogeneous agents. Though the population perspective (10a and 10b) recognized the presence of local neighborhood effects (thus local differences in direct interdependence) by accounting for the effect of partner selection as being similar to an exponential transformation, this has been a very general hypothesis without particular accuracy. One of the advantages of agent-based simulation is to include heterogeneity in detail, considering every single agent with her particular
17 Oscillating refers to what would happen if, with a hypothetical constant threshold, the payoffs of all agents fell below that value and thus all the agents would always have to change their strategy.
18 If the payoff distribution were symmetric, unbiased and non-trivial (not concentrated at a single point), 25% of the population would fall below the value MEAN(P) − SD(P), which is equivalent to the above threshold with the parameter h = 1. However, the distribution will in our case generally be two-peaked and thus not symmetric. Still, the above threshold will certainly separate less than half of the population (as long as the distribution is non-trivial, SD(P) ≠ 0) but more than zero of the least "lucky" agents.
Coordination on “Meso”-Levels
Table 1 Variables of the simulation model and their effect on the emergence of cooperation

Symbol   Meaning                                                                    Effect on cooperation
p1       Probability of the round to continue for at least 1 further interaction   Positive
A, d     Incentive structure                                                        Positive
B, c     Incentive structure                                                        Negative
g        Number of rounds per iteration                                             Positive
h        Threshold parameter                                                        Ambivalent
n        Number of agents                                                           Ambivalent
dn       Number of direct neighbors (network structure)                             Ambivalent
mem      Memory length                                                              Positive
m        Monitoring capacity (monitoring of m degrees of neighborhood)              Positive
q        Probability of success for partner selection                               Positive
neighborhood structure and the actions and reactions of her particular neighbors. The system can be expected to show smaller shares of cooperators from the second iteration on; depending on the setting, this subpopulation will either form a sustainable group of cooperators that grows to dominate the system, or it will not exceed a rather small number of former defectors that are "temporarily trying" cooperation only to return to their original strategy at the very next opportunity. In the former case, cooperators have to be "lucky" enough to find sufficiently many other cooperators in their neighborhood (for very small numbers of direct neighbors, only one may suffice). Partner selection will then gradually allow them to cut links to defectors while retaining links to other cooperators, thus leading to an expanding network of cooperation. There will, however, always be a certain share of defectors or "temporary defectors" that return to cooperation after very few iterations. As soon as defectors grow rare, some cooperators will fall below the threshold, switch strategies, exploit their neighbors once, only to be excluded from the group and return to cooperation in another neighborhood. Thus, the share of cooperators will vary around a certain percentage even if cooperators succeed in forming a sustainable network. Ex ante, we can assume that the defined variables influence the average share of cooperation as indicated in Table 1.
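For reference, the variables of Table 1 can be collected in a single configuration object. The following sketch is our own shorthand, not taken from the authors' program; the defaults follow the benchmark case described below, except for the memory length, whose value is an arbitrary assumption:

```python
# Configuration sketch collecting the model variables of Table 1.
# Field names are our own shorthand, not the authors' identifiers.
from dataclasses import dataclass

@dataclass
class SimulationSettings:
    n: int = 64      # number of agents
    dn: int = 3      # direct neighbors per agent (network structure)
    p1: float = 0.5  # probability of the round continuing for another interaction
    g: int = 2       # rounds per iteration (strategy-evaluation interval)
    h: float = 1.0   # threshold parameter in T = MEAN(P) - h * SD(P)
    m: int = 1       # monitoring up to m degrees of neighborhood
    q: float = 0.5   # probability of success for partner selection
    mem: int = 10    # memory length (default value is our assumption)

benchmark = SimulationSettings()  # the benchmark case discussed below
print(benchmark)
```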
Some Results

Parameter Constellation

Figure 7 shows the usual behavior of the system for a setting with n = 64, p1 = 0.5,19 dn = 3, m = 1 (agents observe just their direct neighbors and those neighbors' respective opponents), q = 0.5 (imperfect partner selection), and a threshold coefficient h = 1 to compute the threshold. Agents with smaller payoffs are to change their strategy; this evaluation is exercised after every 2 rounds (g = 2). The development of the system is shown in Fig. 8 in an "artificial" 2-dimensional topology at six different stages (after iterations 1, 8, 16, 24, 32, and 64). "Artificial" means that the 2-dimensional position and proximity of the agents do not have any particular meaning, since the original topology is a generic graph and thus not 2-dimensional in that sense; what is meaningful in the depiction is the color and the relative number of connections. For the current simulation runs, we document only the first 64 iterations,20 as for the benchmark case and the variations presented below the system stabilizes after about 30–60 iterations. Further, for 64 agents and 64 iterations, the computation time is moderate even for a wider monitoring range m and neighborhood structures with higher dn, which allowed us to study a greater variety of parameters.21 For the benchmark case, however (the case shown in Figs. 7 and 8), the simplest non-trivial neighborhood structure, the setting with three direct neighbors (dn = 3), will be used.22

19 This is equivalent to an expected number of interactions per round of 1 + log_0.5 0.5 = 2. See footnote 5.
20 In Figs. 7 through 11, the iterations are counted beginning with 0 through 63.
21 Simulations with larger numbers of agents and over longer time spans will be conducted in the future.

Fig. 7 Computer simulation results: Sample run for n = 64, p1 = 0.5, m = 1, h = 1, g = 2, dn = 3 with partner selection. Development of the share of cooperators (TFT), share of successful cooperation, and payoffs of cooperators and defectors (Note: the scales differ: the cooperators' mean payoff scale runs from 1 to 66, while the defectors' mean payoffs are only between 13 and 37)

Fig. 8 Computer simulation results: Development of the system as illustrated in Fig. 7 over time. The two-dimensional topology has no particular meaning and is only applied for the purpose of visualization. Gray nodes (points) indicate defectors, gray lines indicate exploiting encounters in the last interaction (one of the two agents is a defector), black lines indicate cooperative encounters in the last interaction (both agents are cooperators)
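The expected round length given in footnote 19 can be checked numerically: each round consists of at least one interaction and continues with probability p1 after each one, so the number of interactions per round is geometrically distributed with mean 1/(1 − p1) = 2 for p1 = 0.5. A quick Monte Carlo sketch (our own code, not the authors'):

```python
# Monte Carlo check: expected number of interactions per round for p1 = 0.5.
import random

def round_length(p1, rng):
    """Draw the number of interactions in one round: at least one,
    continuing with probability p1 after each interaction."""
    k = 1
    while rng.random() < p1:
        k += 1
    return k

rng = random.Random(42)
p1 = 0.5
trials = 100_000
mean_length = sum(round_length(p1, rng) for _ in range(trials)) / trials
print(mean_length)  # close to the theoretical value 1 / (1 - p1) = 2
```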
General Results

Figure 7 shows that a small group of cooperators immediately comes into being, resulting from some defectors' payoffs not exceeding the payoff threshold value T. Thereafter, a network of cooperators23 forms and stabilizes as cooperators learn about their neighborhood and work to increase the share of cooperators in this neighborhood by means of partner selection. This process can be observed in the upper right graph of Fig. 7: The share of cooperation in cooperators' behavior refers only to the single iteration; while TFT players usually cooperate, they defect when confronted with known defectors. Hence, this share can be used to monitor the average beliefs of cooperators about their neighborhood. Furthermore, if this graph is smooth, the beliefs are mostly justified (in Fig. 7 this is the case from about the tenth iteration on). At first, the share of cooperation in cooperators' behavior decreases as they gradually identify the defectors. When they succeed in increasing the share of cooperators in their neighborhood, the share of cooperation in their encounters rises again. The first phase of this process takes about eight to ten iterations. In Fig. 8, the first small group of cooperators can be seen after eight iterations, gradually increasing during the next 20 iterations and then remaining roughly the same for the rest of the simulation. The share of cooperators in the system (see Fig. 7, upper left graph) increases to about 75% in the first 20 iterations as well, where it seems to reach a stable state with only minor variations during the remaining iterations. The initial period is crucial for the mean payoff of cooperators (see Fig. 7, middle left graph) as well, with sharp increases, while the level seems more stable afterwards. The mean payoff of defectors (see Fig. 7, middle right graph), on the other hand, is subject to major turbulence but still rises over time, although to a much smaller extent than the payoff of cooperators: With more cooperators in the system in later periods, defectors occasionally get the opportunity to exploit a cooperator not informed about this particular defector. Thus, defectors may be said to participate in the wealth institutional cooperation generates. With the mean payoffs of cooperators and defectors increasingly diverging, the standard deviation of the total mean payoff (see Fig. 7, lower graph) increases as well.

We now alter the settings and investigate individual effects, using the sample simulation run depicted in Figs. 7 and 8 as a benchmark case. As the different data series shown for the run in Fig. 7 mostly reflect the same effects,24 we will limit this investigation to the data series on the share of cooperation in the system (the upper left graph in Fig. 7). In all, this result means that the emergence of cooperation is successful for the benchmark setting and occurs during the first 30 iterations. While the cooperating group is established rather quickly, the success of this group continues to improve over time (as seen from the share of cooperation in cooperators' behavior and the mean payoffs of cooperators in Fig. 7). Having established the general lines along which the benchmark simulation, a simple but non-trivial setting, evolves, we can continue to study the effects of particular variables as well as the sensitivity of the system to changes in the setting. While we also studied the behaviour with other incentive structures, a changed probability to meet again (p1), and altered memory,25 the effects of partner selection, monitoring, and the neighborhood structure are particularly instructive.

22 For dn = 2 the graph would be a completely regular circle.
23 "Network of cooperators" refers in this case to the particular shape of the cooperating "meso"-group being part of the ad hoc network topology and forming a more stable area within this network, since connections between cooperators are retained while everyone tries to cut connections to defectors.
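The conditional TFT behavior described above (cooperate by default, defect against known defectors), and the share of cooperation in cooperators' behavior derived from it, can be sketched as follows; the data structures are our own illustration, not the authors' code:

```python
# Sketch of conditional TFT play: cooperate by default, defect against
# opponents believed to be defectors.

def tft_move(opponent, known_defectors):
    """Move of a TFT player: 'C' (cooperate) or 'D' (defect)."""
    return 'D' if opponent in known_defectors else 'C'

def share_of_cooperation(opponents, known_defectors):
    """Share of cooperative moves a TFT player makes in one iteration --
    the quantity tracked in the upper right graph of Fig. 7."""
    moves = [tft_move(o, known_defectors) for o in opponents]
    return moves.count('C') / len(moves)

# a TFT player meeting five opponents, two of them known defectors
print(share_of_cooperation([1, 2, 3, 4, 5], known_defectors={2, 5}))  # 0.6
```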
The Effect of Partner Selection

First, Fig. 9 shows the benchmark case with (imperfect) partner selection (left graphs) compared to another simulation run without partner selection but otherwise identical. A smaller group of cooperators exists even in the case without partner selection, but the group is unable to form a network (upper right graph), i.e., the cooperators are unable to associate with other cooperators, thus virtually always defecting and, of course, sometimes being exploited (lower right graph). The roughly 20% of cooperators indicated are not the same agents over time but always different agents changing their strategy, "experimenting" with cooperation. Unlike in the partner-selection case, however, these experiments prove unsuccessful for them. In all, partner selection proves to be most critical for the successful establishment of a cooperating group in the system. Without partner selection, the network topology remains static, with some agents switching from defection to TFT and back; a cooperating group does not form. In the (imperfect) partner-selection case, on the other hand, the network is generally unstable. However, agents succeed in influencing their neighborhood such that a sufficiently stable area of cooperators (a "meso"-group) forms in the ad hoc network structure, as discussed above.
24 The share of cooperation rises, the mean payoff of cooperators rises, the share of cooperation in cooperators' behavior rises, and the standard deviation of payoffs rises as well. All these trends are, as we outlined, consequences of the same process. Therefore it is generally sufficient to illustrate the following investigations with just one or two of these graphs.
25 We decreased the memory length, the probability for the round to continue (p1), and the TFT vs. TFT payoff a (of course keeping a larger than c, thus retaining the prisoner's dilemma structure). All three changes proved to have negative effects, slowing the emergence of cooperation considerably and also leading to a smaller final share of cooperators (though this share had already stabilized). Increasing these three values instead had, of course, the contrary effect, accelerating the emergence of cooperation. As these results had been expected and discussed extensively in theoretical considerations, we do not include any extensive graphic documentation for these simulation runs.
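One way to implement imperfect partner selection is to let an agent cut each link to a known defector with success probability q and rewire it to a randomly chosen non-neighbor. This is a hedged sketch under our own assumptions; in particular, how the new partner is chosen is not specified in the text:

```python
# Hedged sketch of imperfect partner selection (success probability q).
import random

def partner_selection(graph, agent, known_defectors, q, rng):
    """graph: dict agent -> set of neighbors (modified in place)."""
    for d in [x for x in graph[agent] if x in known_defectors]:
        if rng.random() < q:
            # rewiring target: any non-neighbor (our assumption)
            candidates = [x for x in graph
                          if x != agent and x not in graph[agent]]
            if not candidates:
                continue  # nobody left to rewire to
            new = rng.choice(candidates)
            graph[agent].discard(d)
            graph[d].discard(agent)
            graph[agent].add(new)
            graph[new].add(agent)

# agent 0 cuts its link to the known defector 1 and rewires to agent 3
g = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
partner_selection(g, 0, known_defectors={1}, q=1.0, rng=random.Random(0))
print(g[0])  # now contains 3 instead of 1
```

Note that with q < 1 some links to known defectors survive each iteration, which is exactly the imperfection that lets defectors occasionally exploit cooperators.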
Fig. 9 Partner selection in the simulation model: Benchmark run (left) and a simulation run without partner selection, but otherwise identical (right)
Fig. 10 Monitoring in the simulation model: Simulation with benchmark settings, monitoring of the neighborhood up to the first degree (left) and a simulation with monitoring of the neighborhood up to neighbors of the third degree (right)
The Effect of Monitoring

Figure 10 deals with the effect of monitoring. The left graph presents the above-mentioned benchmark setting with monitoring of just the direct neighborhood; in the right one, the setting allows monitoring of up to three degrees of neighborhood (m = 3)
(which, with three direct neighbors for each agent and 64 agents in the system, amounts to almost half of the population). We can see that the final stable value of roughly 75% cooperators is not changed by this variable; however, the better the monitoring, the faster this level of cooperation is achieved.
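"Monitoring up to m degrees of neighborhood" can be read as a breadth-first traversal of the interaction graph up to depth m; the set returned below contains the agents whose behavior an observer can learn about. The adjacency-dict representation is our own choice, not the authors' data structure:

```python
# BFS sketch of monitoring up to m degrees of neighborhood.
from collections import deque

def monitored_agents(graph, agent, m):
    """Agents whose behavior `agent` can observe (within m degrees)."""
    seen = {agent}
    frontier = deque([(agent, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == m:
            continue          # do not look beyond m degrees
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    seen.discard(agent)       # an agent does not monitor herself
    return seen

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # tiny ring, dn = 2
print(sorted(monitored_agents(ring, 0, 1)))  # [1, 5] -- direct neighbors only
print(sorted(monitored_agents(ring, 0, 3)))  # [1, 2, 3, 4, 5] -- nearly all
```

The example illustrates why a moderate m already covers a large part of a small, sparse population, consistent with the observation that m = 3 reaches almost half of the 64 agents.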
The Effect of the Neighborhood Structure

Finally, other environment variables are important as well. Figure 11 shows the effect of the neighborhood structure. The benchmark setting (three direct neighbors, left graph) is compared to a simulation run with four direct neighbors (right graph). While cooperators thrive equally well in both cases, cooperation emerges much faster in the latter case. There seems to be a positive effect of network density on cooperation. However, in another simulation run (not graphically depicted) this effect was reversed for a different probability of the current interaction to continue (p1 = 0, that is, every round lasts only one interaction). We conclude that there are negative aspects of the number of direct neighbors as well. Possibly, the capacity of the group of cooperators to expand is greater (being able to establish more connections per agent), while the danger of being exploited is also greater. The relation of the number of neighbors to the total number of agents in the population is probably important as well. We programmed the simulation not to allow agents significantly fewer direct connections than the target number dn of direct neighbors specified in the settings, thus forcing defectors into the network even if no one wants to be neighbored to them. This, of course, creates advantages for defectors.
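A dn = 3 regular neighborhood structure for an even number of agents can be built, for instance, as a ring plus "diameter" chords connecting opposite agents. This is our own simple construction for illustration; the authors' generic graph topology may be generated differently:

```python
# One simple 3-regular (dn = 3) neighborhood structure: a ring plus
# "diameter" chords linking opposite agents (our own construction).

def ring_with_chords(n):
    """Adjacency dict of a 3-regular graph on n agents (n must be even)."""
    assert n % 2 == 0, "a 3-regular graph needs an even number of nodes"
    graph = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}  # the ring
    for i in range(n // 2):            # add the chord i <-> i + n/2
        graph[i].add(i + n // 2)
        graph[i + n // 2].add(i)
    return graph

g = ring_with_chords(64)               # n = 64 as in the benchmark case
print(all(len(neighbors) == 3 for neighbors in g.values()))  # True
```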
Reflection

To summarize, it can be shown that positive effects of partner selection, monitoring, and other factors such as the neighborhood structure, the memory length, or the
Fig. 11 The neighborhood structure in the simulation model: Benchmark run (left) and a simulation with four direct neighbors for each agent ceteris paribus (right)
incentives of the underlying game exist. Several factors have very similar effects. For example, changing the probability to meet again is in effect (as is obvious from the statistical expectation) equivalent to a change in the incentive structure in favor of cooperators. The working of memory, monitoring, and partner selection is more complex but points in the same direction. Some of the mechanisms tend to alter the sustained share of defectors in the system, and thus the relative size of the cooperating group; others increase the speed of the development of cooperating groups. As cooperators will try to retain connections to each other while attempting to exclude known defectors from their neighborhood, the (pre-existing) network is more stable within cooperating groups. The emergence of cooperation generally takes place as the formation of connections between the first cooperators in the population and the enlargement of this stable network, as illustrated in Fig. 8. Compared to the critical masses that were derived in the preceding sections, the crucial values in an agent-based simulation setting are statistical means and expected values. The emergence of cooperation does not depend on fixed minimum shares of cooperators but rather on the likelihood that at some point a cooperator will be able to connect to sufficiently many other cooperators to raise her payoff above the current threshold value. The same is true for the maximum critical mass: how many defectors will be able to exploit a largely cooperating system at the same time? If there are too many defectors, will they be forced to switch to cooperation or will the group of cooperators collapse?
A Real-World Example: Trust Polls, Country Size, and Macro-Performance

Before we conclude, a short example should suffice to indicate the empirical, and topical, relevance of "meso" size. Trust polls, nowadays carried out regularly in most countries on behalf of the World Bank and other major organizations (see, e.g., Knack and Keefer 1997; Keele and Stimson 2002), namely the World Value Survey, include general trust questions such as: "Do you think you can trust the next person you will encounter?" Obviously, this is fully equivalent to the expectation of meeting a cooperative agent in the next round, i.e., contingent trust. Such polls have brought about surprising differences and even considerable divergence over time in trust levels among presumably similar and converging countries (e.g., leading OECD countries). This is particularly the case between large and small countries. Similarly surprising was the fact that such trust levels have turned out to be highly correlated with economic and social performance in broad areas (see also, e.g., O'Hara 2008; Häkli and Minca 2009; Farrel 2009; Christoforou 2010); moreover, trust has specifically been found to have a positive effect on innovation (Dakhli and de Clercq 2004; Wilkinson and Pickett 2009).
It has indeed been argued in the more recent "varieties of capitalism" literature that it may particularly be smaller countries which display such high trust and performance (see, e.g., Kesting and Nielsen 2008; see also already Fukuyama 1995, Chaps. 3, 28, pp. 23ff., 335ff.). Conventional economics has mostly stressed the disadvantages and volatility of small countries (see, e.g., Barro and Sala-i-Martin 2003; Alesina and Spolaore 2003). The empirical record of the relation between smallness and economic performance is not obvious and clear-cut. Many have argued in favor of advantages of smaller countries (see, e.g., Kuznets 1960; Easterly and Kraay 2000), particularly their superior adaptability, learning and cooperation conditions as dependent on proximity and interaction density (see, e.g., Cantner and Meder 2008). Considering critical factors in the "deep structure" of countries, such as futurity (expectations) and size as investigated in this paper, specifically those small countries that are internally structured in overlapping functional, spatial, professional, organizational, or jurisdictional communities, groups, networks, associations, etc. have turned out to display such favorable trust (and high social capital, indicated, for instance, through membership and participation) and performance properties. It appears that the principle of "smallness" has been internally generalized by them (through a cumulative historical process rather than deliberate political design, of course), so that they can make use of "meso"-sized "arenas", "platforms", and groups. In this way they may generate trust and facilitate the emergence of institutionalized cooperation, related innovative action capabilities, and macro performance.
This typically applies to the Scandinavian countries, given their overall sizes, residential structures, enterprise size structures, dominant interactive workplace organization, general organizational participation, informal network structures and policy frame-setting (particularly safeguarding some level of social integration, stability and security) (see, for instance, the rich material on the Danish case in Jorgensen 2002; Lundvall 2002; Edquist and Hommen 2008; Christensen et al. 2008; Holm et al. 2008, 2010). In this way, even a country with a population of 5.4 million may be sufficiently interconnected through "meso"-arenas to mobilize reputation chains and to generate high levels of trust, institutionalized cooperation, and socio-economic performance. Note that the Scandinavian countries (incl. NL, A, and some others) are not only leading in social areas like low poverty, even distribution, high employment, education, social security or subjective well-being, but are also in the leading groups in terms of innovation rates, GDPpc, speed of structural change, labor market mobility, globalization rates, future expectations, etc. Also, they have largely not been too much affected by the current financial meltdown. Some indicators of the relative positions of Scandinavian countries and small countries in general are given in Table 2. The World Bank, the OECD, the EU, and others may have to proceed to a more complex explanation of trust, and the theoretical framework of institutional emergence, with its co-evolutionary dimensions of contingent trust (expectations) and "meso"-sized group formation, might play a more important role in this field in the future.
Table 2 Indications of "meso" structures, institutionalized cooperation, general trust and macro performance

General trust survey data
Selected countries       1990    1999/2000    2004–2008
Denmark                  57.7    66.5         71.3a
Finland                  62.7    57.4         58.8
Norway                   65.1    67.2a        69.2a
Sweden                   66.1    66.3         68.0
Netherlands              53.5    60.1         44.5
To compare: Germany      32.9    37.5         34.1

General trust correlations (13 smaller and larger OECD countries, if not indicated otherwise)
Indicator                          Correlation with general trust, survey wave 2004–2008
Population size                    0.34
Membership                         0.88 (9 countries)
Share of cooperating innovators    0.80
GDPpc                              0.51
GDP growth                         0.34

Other correlations between empirical indicators of "meso" and cooperation
Population size/share of cooperating innovators: 0.89 (6 countries)
Membership/share of cooperating innovators: 0.84 (5 countries)

Source: Own calculations based on World Value Survey data; http://www.worldvaluessurvey.org
a Estimated
Other than the usual political, welfare-state, social-integration and social-security argument explaining the "Scandinavia plus" variety of capitalism, we would suggest empirically investigating the size dimension working in the "deep structure" of the interaction systems of countries as one critical factor among others. The methodological problem such an empirical study immediately faces is that many "meso" structures are already historically given in those countries, e.g., settlement systems (little urbanization, rural structures), membership and participation rates (i.e., social capital), cooperation networks of firms, and the like. Can we isolate these factors and their impacts on institutionalization and "meso platform" emergence? On the other hand, can we empirically identify the crucial factors and mechanisms of our model, such as incentives, expectations, reputation chains and partner selection, that trigger further "meso" generation? We can expect to explain only the continuing reproduction, sometimes the growth, and the sometimes surprising resilience under adverse conditions, rather than the original emergence, of meso structures, supporting expectations (namely general trust), institutionalization of cooperation and, finally, macro performance. Specifically, can we explain further trust growth in spite of fast change, partial de-regulation, and globalization? The overall causal chain might map between the model and empirical data in the following way:

1. Given formal and informal inner "meso" structures: empirical (e.g., population size; settlement structures; membership; share of cooperative innovators).
Fig. 12 An empirical research setting: Socio-economic conditions and results of "meso" structures – empirical indicators (circles) and theoretical mechanisms and predictions (rectangles) combined. A tentative scheme
2. The "mechanisms": model.
3. Reproduction of informal inner "meso" structures: model, empirical (e.g., Δ membership).
4. High and increasing trust/institutionalization of cooperation/social capital regeneration: model, empirical (e.g., Δ general trust; Δ share of cooperative innovators).
5. High macro-performance: model, empirical (e.g., GDPpc; ΔGDP).

Figure 12 gives an illustration of the sequence of an empirical research program based on our model, focusing on trust and performance in "small (and well-structured)" vs. large countries.
Conclusions

The "meso"-layer of economic systems (populations) is both an agglomeration of "meso"-sized groups and a loosely connected network. First, from the perspective of the individual agent, the "meso"-layer is her peer group, which generally is beneficial for her but, owing to a number of limitations (limited memory and monitoring, limited interaction capacity, limited ability to engage in partner selection, etc.), cannot exceed a certain size. Further, as shown in the model as well as in the simulations, a large peer group (i.e., in the simulation setting a high number of direct neighbors, which in turn also have a high number of direct neighbors, etc.) combined with
limited memory and limited success in partner selection harbors the imminent danger of being exploited by defectors. Hence, in successful networks with a larger share of total cooperation, groups tend to be small with respect to the total network size. Second, taking a broader view, these peer groups are connected, forming a network and, if successful, a network of cooperation enjoying relative stability. This network, likely containing clusters, i.e., dense and sparse areas, is not only documented in the simulation conducted. It may also be interpreted as actual socio-economic entities such as firms, innovation networks, industrial sectors, regions, etc. Third, cooperators must attain a certain share of the population so that there is a realistic chance for them to encounter other cooperators, benefit from mutual cooperation and form a network (a minimum critical mass). In large networks, the existence of a number of cooperators that could theoretically form their own network is not enough, as they are rather unlikely to meet each other before disenchantedly switching back to defection. Thus, the number of cooperators needed to make the creation of a cooperating group likely is much higher. Fourth, any mostly cooperating population sustains a certain share of defectors or temporary defectors; only if this share becomes large does the network and institution of cooperation collapse. "Temporary defectors" means agents that switch to defection when under the impression that cooperation does not pay for them (or, more technically, having fallen below the payoff threshold), then exploit some cooperators, losing their reputation and, through partner selection, most neighbors, and then return to cooperation in another neighborhood. This is, of course, equivalent to a maximum critical mass of cooperators in such a system.
In this paper, the co-evolution of "meso" size, stable cooperating networks, and institutional cooperation has been investigated in the framework of evolutionary theory and the emergence of cooperation. Further, we have considered a number of particular mechanisms in a partially stochastic population approach and a computer simulation of a few particular parameter settings using an agent-based model. The critical factors explored here are:

1. The incentive structure.
2. A distribution of a population in a topology, i.e., proximity and "neighborhood structure", with corresponding rules of interaction (e.g., mobility rules).
3. An initial (stochastic, although "motivated") distribution of strategies, and a "minimum critical mass" of cooperators (in the simulation model this is equivalent to the dynamics of strategy change, which occur if payoffs are below the threshold T).
4. "Agency" mechanisms such as partner selection, monitoring, memorizing, as well as reputation building.
5. Potential additional mutual externalities of cooperation, cumulative learning, or "synergies", typically justifying a degressively growing cooperative payoff curve; and, finally,
6. The "relevant cooperating group" (maximum critical mass), typically smaller than the total number of agents and pervading the whole population as loosely connected networks. This may also be represented by segregation patterns.

In particular, we were able to demonstrate that the capacity of individuals to influence the structure of their peer group (partner selection), as well as the incentive structure and the neighborhood structure, affects both whether cooperation emerges and how large the network of cooperation can become. While this also depends on memory, monitoring, reputation, etc., the former agency mechanisms particularly increase the ease and speed of the initial development of the institution of cooperation. As the simulation addresses only a few parameter settings and considers a relatively small number of both agents and iterations, much remains to be done. Of particular interest is the study of the stability of the results for larger populations, other payoff threshold levels, other (denser) neighborhood structures, the effect of clustering, less favorable incentive structures, as well as other mechanisms. Further investigation will also include considering growing or shrinking populations and matching the simulation settings with empirical values of trust levels, group sizes, group memberships, etc., as indicated in section "A Real-World Example: Trust Polls, Country Size, and Macro-Performance" and illustrated in Fig. 12. A more general "meso"-economics may be envisaged in this way, i.e., the economics of the emergent "mid-size" level for increasing the coordination, capability, and performance of a population in an evolutionary process.
Its applications, such as cultural emergence, production and innovation standards, technological paradigms and trajectories, cultures, information governance systems, particularly open-source governance in a fragmented and interconnected "new" economy, spatial industrial organization and agglomeration, general trust and macro performance, etc., have high practical relevance for a broad range of real-world developments. Of course, our approach also confirms that we cannot expect a non-trivial, automatic emergence of a stable, self-sustaining, Pareto-efficient institutionalized equilibrium, i.e., no perfect "deliberation-free", "hierarchy-free", or "state-free" stable private self-organization or "spontaneous order". For instance, the initiation and generation of a "minimum critical mass" may require a strong role for public policy (see, e.g., Schelling 1978; Cooper and John 1988; Elster 1989, 31ff.; Elsner 2001, on the public provision of selective incentives and a high level of futurity in the perspectives of the agents). This may lead to a new interactive public policy design, focused on specific "frame-setting" to trigger the causal factors, which in turn may allow for institutional emergence, co-evolving with meso structure and collective action capability among private agents.26 This may particularly give space for the emergence of the capability and inclination to learn and innovate,
We have put an interactive or institutional, i.e., frame-oriented public policy in the focus of a “negotiated economy” or “new meritorics” (see Elsner 2001). This might have to be modeled using cooperative game theory (see, e.g., McCain 2009).
160
W. Elsner and T. Heinrich
where otherwise too much volatility, turbulence, and uncertainty, and too little futurity, would undermine efforts to learn and innovate. Thus, innovative behavior may, paradoxically, emerge through the stabilization of behaviors and expectations attained by institutionalized coordination (see, e.g., Boudreau et al. 2008). The co-evolutionary process of "meso"-economics investigated here is far from fully understood and sufficiently elaborated. Although network formation and size have become an important dimension of many socio-economic approaches and complex models, further strengthening relevant, applied, empirical, and policy-oriented economic research, such as the topical research on "meso" structures, "general trust", and macro performance, requires further elaboration and simulation of the conditions, logic, process, and socio-economic effects of "meso".
Acknowledgements Earlier versions of the paper on which the authors drew for this chapter were presented at the 2006 Schloss Wartensee Workshop on "Evolutionary Economics", St. Gallen, CH (the authors wish to express their thanks to K. Dopfer and the discussants there), the January 2008 AFEE session in New Orleans, the EAEPE conference in Rome, November 2008, the ASSA meetings in San Francisco, January 2009 (thanks to P. Arestis and J. Galbraith for the discussion there), the EAEPE conference in Amsterdam 2009, and the annual conference of the Section of Evolutionary Economics of the Verein für Socialpolitik in Jena 2009, as well as several seminars and workshops at PSU Portland (thanks to J.B. Hall and discussants), the MPI Jena (thanks to U. Witt and discussants), the New School NYC (thanks to D. Foley and participants), and UM Kansas City (thanks to J. Sturgeon and discussants). The authors are especially grateful to H. Schwardt and M. Greiff for numerous discussions of all details of the current paper, and to the anonymous referees who commented on earlier versions published in the Journal of Socio-Economics in 2009 and the Journal of Evolutionary Economics in 2010. The authors, however, retain the property rights to all remaining deficiencies.
References
Alesina A, Spolaore E (2003) The size of nations. MIT Press, Cambridge, MA
Ahdieh R (2009) The visible hand: coordination functions of the regulatory state. Emory Public Law Research Paper No. 09-86; Emory Law and Economics Research Paper No. 09-50
Arthur WB (1994) Inductive reasoning and bounded rationality (the El Farol problem). Am Econ Rev 84:406
Arthur WB (1989) Competing technologies, increasing returns, and lock-in by historical events. Econ J 99:116–131
Axelrod R (1984/2006) The evolution of cooperation. Basic Books, New York (rev. ed. 2006)
Aydinonat NE (2007) Models, conjectures and exploration: an analysis of Schelling's checkerboard model of residential segregation. J Econ Meth 14(4):429–454
Ayres RU, Martinás K (2005) On the reappraisal of microeconomics. Edward Elgar, Cheltenham
Barro RJ, Sala-i-Martin X (2003) Economic growth. MIT Press, Cambridge, MA
Batten DF (2001) Complex landscapes of spatial interaction. Ann Reg Sci 35(1):81–111
Binmore K (1998) Review of Axelrod, R., The complexity of cooperation. JASSS 1(1) http://jasss.soc.surrey.ac.uk/1/1/review1.html
Boudreau KJ, Lacetera N, Lakhani KR (2008) Parallel search, incentives and problem type: revisiting the competition and innovation link. Harvard Business School Technology and Operations Mgt. Unit Research Paper No. 1264038
Bush PD (1987) The theory of institutional change. J Econ Issues XXI:1075–1116
Cantner U, Meder A (2008) Regional effects of cooperative behavior and the related variety of regional knowledge bases. University of Jena, Germany (mimeo)
Chen P (2008) Equilibrium illusion, economic complexity and evolutionary foundation in economic analysis. Evol Inst Econ Rev 5(1):81–127
Christensen JL, Gregersen B, Johnson B, Lundvall BA, Tomlinson M (2008) An NSI in transition? Denmark. In: Edquist C, Hommen L (eds) Small country innovation systems. Globalization, change and policy in Asia and Europe. Edward Elgar, Cheltenham, pp 403–441
Christoforou A (2010) Social capital and human development: an empirical investigation across European countries. J Inst Econ 6(2):191–214
Cooper R, John A (1988) Coordinating coordination failures in Keynesian models. Q J Econ CIII(3):441–463
Dakhli M, de Clercq D (2004) Human capital, social capital, and innovation: a multi-country study. Enterpren Reg Dev 16:107–128
David PA (1985) Clio and the economics of QWERTY. Am Econ Rev 75(2):332–337
Davis JB (2007) Complexity theory's network conception of the individual. In: Giacomin A, Marcuzzo MC (eds) Money and markets: a doctrinal approach. Routledge, Abingdon, pp 30–47
Davis JB (2008) Complex individuals: the individual in non-Euclidian space. In: Hanappi G, Elsner W (eds) Advances in evolutionary institutional economics: evolutionary mechanisms, non-knowledge, and strategy. Edward Elgar, Cheltenham
Dejean S, Penard T, Suire R (2008) Olson's paradox revisited: an empirical analysis of filesharing behaviour in P2P communities. CREM, University of Rennes, F (mimeo)
Demange G, Wooders M (eds) (2005) Group formation in economics. Networks, clubs, and coalitions. Cambridge University Press, Cambridge
Devezas TC, Corredine JT (2002) The nonlinear dynamics of technoeconomic systems. An informational interpretation. Technol Forecast Soc Change 69:317–357
Dolfsma W, Verburg R (2008) Structure, agency and the role of values in processes of institutional change. J Econ Issues XLII(4):1031–1054
Dolfsma W, Leydesdorff L (2009) Lock-in & break-out from technological trajectories: modeling and policy implications. Technol Forecast Soc Change (forthcoming) http://leydesdorff.net/breakout/breakout.pdf
Dopfer K (ed) (2001) Evolutionary economics: program and scope. Kluwer, Boston
Dopfer K (2007) Schumpeter's economics. The origins of a meso revolution. In: Hanusch H, Pyka A (eds) The Elgar companion to neo-Schumpeterian economics. Edward Elgar, Cheltenham
Dopfer K, Potts J (2008) The general theory of economic evolution. Routledge, London
Dopfer K, Foster J, Potts J (2004) Micro-meso-macro. J Evol Econ 14:263–279
Dosi G, Winter SG (2000) Interpreting economic change: evolution, structures and games. LEM Working Paper Series 2000/08, Pisa, Italy
Easterly W, Kraay A (2000) Small states, small problems? Income, growth, and volatility in small states. World Dev 28(11):2013–2027
Eckert D, Koch S, Mitloehner J (2005) Using the iterated prisoner's dilemma for explaining the evolution of cooperation in open source communities. In: Scotto M, Succi G (eds) Proceedings of the first international conference on open source systems, Genova
Edquist C, Hommen L (eds) (2008) Small country innovation systems. Globalization, change and policy in Asia and Europe. Edward Elgar, Cheltenham, Chapters 1, 12
Elsner W (2000) An industrial policy agenda 2000 and beyond – experience, theory and policy. In: Elsner W, Groenewegen J (eds) Industrial policies after 2000. Kluwer, Boston, pp 411–486
Elsner W (2001) Interactive economic policy: toward a cooperative policy approach for a negotiated economy. J Econ Issues XXXV(1):61–83
Elsner W (2005) Real-world economics today: the new complexity, co-ordination and policy. Rev Soc Econ LXIII(1):19–53
Elster J (1989) The cement of society. A study of social order. Cambridge University Press, Cambridge
Farrell H (2009) The political economy of trust: interests, institutions and inter-firm cooperation in Italy and Germany. Cambridge University Press, New York
Foley DK (1998) Introduction. In: Foley DK (ed) Barriers and bounds to rationality: essays on economic complexity and dynamics in interactive systems, by Albin PS, with an introduction by Foley DK. Princeton University Press, Princeton, NJ
Fukuyama F (1995) Trust. The social virtues and the creation of prosperity. Free Press and Penguin Books, New York
Galeotti A, Goyal S, Jackson M, Vega-Redondo F (2010) Network games. Rev Econ Stud 77:218–244
Gibbons R (2006) What the Folk Theorem doesn't tell us. Ind Corp Change 15(2):381–386
Goyal S (1996) Interaction structure and social change. J Inst Theor Econ 152:472–494
Goyal S (2005) Learning in networks. In: Demange G, Wooders M (eds) Group formation in economics: networks, clubs and coalitions. Cambridge University Press, Cambridge, pp 122–167
Gruene-Yanoff T, Schweinzer P (2008) The roles of stories in applying game theory. J Econ Meth 15(2):131–146
Harmsen-van Hout MJW, Dellaert BGC, Herings PJJ (2008) Behavioral effects in individual decisions of network formation. Maastricht Research School of Economics of Technology and Organizations RM/08/019
Häkli J, Minca C (2009) Social capital and urban networks of trust. Ashgate, Farnham
Henrich J, Boyd R, Bowles S, Camerer C, Fehr E, Gintis H (2004) Foundations of human sociality. Economic experiments and ethnographic evidence from fifteen small-scale societies. Oxford University Press, Oxford
Hodgson GM (2000) From micro to macro: the concept of emergence and the role of institutions. In: Burlamaqui L et al (eds) Institutions and the role of the state. Edward Elgar, Cheltenham, pp 103–126
Hodgson GM (2006) What are institutions? J Econ Issues XL(1):1–25
Holm JR, Lorenz E, Lundvall BA, Valeyre A (2008) Work organisation and systems of labour market regulation in Europe. Paper presented at the 20th EAEPE annual conference, Rome (mimeo)
Holm JR, Lorenz E, Lundvall BA, Valeyre A (2010) Organizational learning and systems of labor market regulation in Europe. Ind Corp Change 19(4):1141–1173
Jennings FB (2005) How efficiency/equity tradeoffs resolve through horizon effects. J Econ Issues 39(2):365–373
Jorgensen H (2002) Consensus, cooperation and conflict. The policy making process in Denmark. Edward Elgar, Cheltenham
Jun T, Sethi R (2009) Reciprocity in evolving social networks. J Evol Econ 19:379–396
Keele LJ, Stimson JA (2002) Measurement issues in the analysis of social capital. World Bank Working Paper
Kesting S, Nielsen K (2008) Varieties of capitalism – theoretical critique and empirical observations. In: Elsner W, Hanappi G (eds) Varieties of capitalism and new institutional deals: regulation, welfare, and the new economy. Edward Elgar, Cheltenham
Kirman A (1998) Economies with interacting agents. In: Cohendet P et al (eds) The economics of networks. Interaction and behaviors. Springer, Berlin, pp 17–51
Knack S, Keefer P (1997) Does social capital have an economic payoff? A cross-country investigation. Q J Econ 112(4):1251–1288
Knudsen T (2002) The evolution of cooperation in structured populations. Constit Polit Econ 13:129–148
Kollock P (1994) The emergence of exchange structures: an experimental study of uncertainty, commitment, and trust. Am J Sociol 100(2):313–345
Konno T (2010) A condition for cooperation in a game on complex networks. SSRN Working Paper, http://ssrn.com/abstract=1561167
Kuznets S (1960) Economic growth of small nations. In: Robinson EAG (ed) Economic consequences of the size of nations. Macmillan, London, pp 14–32 (repr 1963)
Leydesdorff L (2007) The communication of meaning in anticipatory systems: a simulation study of the dynamics of intentionality in social interactions. Vice-presidential address at the 8th international conference on computing anticipatory systems, Liège, Belgium (mimeo)
Lindgren K, Nordahl MG (1994) Evolutionary dynamics of spatial games. Physica D 75:292–309
Lundvall BA (2002) Innovation, growth and social cohesion. The Danish model. Edward Elgar, Cheltenham
Marwell G, Oliver P (1993) The critical mass in collective action. A micro-social theory. Cambridge University Press, Cambridge
McCain RA (2009) Game theory and public policy. Edward Elgar, Cheltenham
Mengel F (2009) Conformism and cooperation in a local interaction model. J Evol Econ 19:397–415
Oestreicher-Singer G, Sundararajan A (2008) The visible hand of social networks in electronic markets. New York University, New York (mimeo)
O'Hara PA (2008) Uneven development, global inequality and ecological sustainability: recent trends and patterns. GPERU working paper, Curtin University, Perth
Olson M (1965) The logic of collective action. Harvard University Press, Cambridge, MA
Ostrom E (2007) Challenges and growth: the development of the interdisciplinary field of institutional analysis. J Inst Econ 3(3):239–264
Page F, Wooders M (2007) Strategic basins of attraction, the path dominance core, and network formation games. Working Paper 2007-020, Center for Applied Economics and Policy Research
Pyka A (1999) Der kollektive Innovationsprozess. Eine theoretische Analyse informeller Netzwerke und absorptiver Fähigkeiten. Duncker & Humblot, Berlin
Schelling TC (1969) Models of segregation. Am Econ Rev 59(2):488–493
Schelling TC (1973) Hockey helmets, concealed weapons, and daylight saving: a study of binary choices with externalities. J Conflict Resolut XVII(3):211–243 (reprinted in Schelling 1978)
Schelling TC (1978) Micromotives and macrobehavior. W.W. Norton, New York
Schotter A (1981) The economic theory of social institutions. Cambridge University Press, Cambridge
Spiekermann KP (2009) Sort out your neighbourhood. Public good games on dynamic networks. Synthese 168:273–294
Stanley EA, Ashlock D, Tesfatsion L (1994) Iterated prisoner's dilemma with choice and refusal of partners. In: Langton CG (ed) Artificial life III, vol XVII, Santa Fe Institute studies in the sciences of complexity. Addison-Wesley, Redwood City, CA
Traulsen A, Nowak MA (2006) Evolution of cooperation by multilevel selection. PNAS 103(29):10952–10955
Tucker C (2008) Social interactions, network fluidity and network effects. NET Institute Working Paper No. 08-30
van Staveren I (2001) The values of economics. An Aristotelian perspective. Routledge, London
Vinkovic D, Kirman A (2006) A physical analogue of the Schelling model. PNAS 103(51):19261–19265
Watts DJ, Strogatz S (1998) Collective dynamics of 'small-world' networks. Nature 393(6684):440–442
Watts DJ (1999) Small worlds. The dynamics of networks between order and randomness. Princeton University Press, Princeton, NJ
Wilkinson R, Pickett K (2009) The spirit level. Why more equal societies almost always do better. Penguin, London, pp 50–61
Yamagishi T (1992) Group size and the provision of a sanctioning system in a social dilemma. In: Liebrand WBG, Messick DM, Wilke HAM (eds) Social dilemmas. Theoretical issues and research findings. Pergamon, Oxford, pp 267–287
Part III
Sectors Matter in Practice
Changes in Industrial Structure and Economic Growth: Postwar Japanese Experiences Hiroshi Yoshikawa and Shuko Miyakawa
Introduction
Balanced economic growth is an imaginary creature which makes sense only in theoretical models. In real economic growth, various sectors always grow at markedly different rates. This is a necessity if economic growth is ultimately generated by innovations, because innovations do not pertain to the economy as a whole but are specific to particular firms or sectors (Schumpeter 1934, 1939). Differences in scale economies across sectors are another factor generating unbalanced growth (Young 1928). As a result of unbalanced growth, the structure of an economy changes over time. Historically, in all the advanced economies, the share of agriculture kept declining in the process of growth; manufacturing and services (the tertiary sector) took the place of agriculture. Within manufacturing, industrial structure also changes over time. These processes are well documented by many economists (Clark 1957; Kuznets 1959; Chenery 1960; Ohkawa and Rosovsky 1973; Rostow 1990). In fact, smooth changes in industrial structure appear to be a necessary condition for economic growth. Here, we can usefully observe the difference in growth rates between Japan and Argentina. As of 1900, the per capita income of Argentina was 2,756 US dollars, more than double that of Japan, which was only 1,135 US dollars (Appendix D of Maddison 1995). After a century, however, the positions had been reversed; in 1994, the per capita incomes of Argentina and Japan were 8,373 and 19,505 US dollars, respectively. Changes in industrial structure, and a lack thereof, must have something to do with this outcome. Plainly, we cannot overlook changes in industrial structure if we are to fully understand economic growth. In this chapter, we document the changes in industrial structure in postwar Japan in detail. We also present a brief international comparison
H. Yoshikawa (*) • S. Miyakawa University of Tokyo, Tokyo, Japan e-mail:
[email protected] S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_7, # Springer-Verlag Berlin Heidelberg 2011
in section "International Comparison". An important question is, of course, what the fundamental causes are of the changes in industrial structure which occur in the process of economic growth. We consider this question in section "Theoretical Discussion".
Experiences of the Postwar Japanese Economy
Table 1 shows the changes in the shares of three sectors in Japan during 1950–1995. It demonstrates the significant changes in industrial structure in postwar Japan. In this section, following Ozaki (2004), we document these changes in greater detail with the help of the "skyline diagram". The skyline diagram can be understood with the following equation. Suppose that $Y$ is the sum of components $X_i$ ($i = 1, \ldots, n$):
\[ Y = \sum_{i=1}^{n} X_i \]
Then the growth rate of $Y$ is the weighted average of the growth rates of the $X_i$'s:
\[ \frac{\Delta Y}{Y} = \sum_{i=1}^{n} \frac{X_i}{Y}\,\frac{\Delta X_i}{X_i} \qquad (1) \]
The skyline diagram plots the share of $X_i$, namely $X_i/Y$, horizontally and the growth rate of $X_i$ vertically. Thus, the area of the $i$-th rectangle corresponds to the contribution of the growth of $X_i$ to that of $Y$. By definition, by Eq. (1), the sum of the areas of all the rectangles equals the growth rate of $Y$. In this way, the skyline diagram allows us to understand visually the contributions of the $X_i$ to $Y$.
Figures 1a–10a show the contribution of each sector to the growth rate of nominal GDP for consecutive 5-year periods from 1955–1960 to 2000–2005. The horizontal axis is the share of each sector in nominal GDP, while the vertical axis shows the growth rate of each sector, cumulative over 5 years. Therefore, the area of each rectangle expresses the "contribution" of each sector to the growth of nominal GDP cumulative over 5 years. Figures 1b–10b are the corresponding skyline diagrams for manufacturing industries rather than for GDP. With the skyline diagrams, we can see at a glance which industry is a leading sector in each period. For reference, the appendix shows the relative contribution, that is, the contribution of each sector, $(X_i/Y)(\Delta X_i/X_i)$, divided by the growth rate $\Delta Y/Y$. In what follows, we explain the growth pattern of each period by using these diagrams.
Table 1 Shares of three sectors in GDP: Japan (%)
                    1950   1970   1995
Primary sector      26.0    6.1    1.8
Secondary sector    31.8   44.5   33.8
Tertiary sector     42.2   49.4   64.4
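The decomposition in Eq. (1) is straightforward to compute. The following Python sketch uses made-up sector values (not the chapter's data) to derive each sector's share, growth rate, and contribution, and checks that the contributions sum to the aggregate growth rate:

```python
# Skyline decomposition: a sector's contribution to aggregate growth equals
# its initial share (X_i / Y) times its own growth rate (dX_i / X_i).
# The sector values below are illustrative only, not the chapter's data.

def skyline(start, end):
    """Return {sector: (share, growth, contribution)} and aggregate growth."""
    y0, y1 = sum(start.values()), sum(end.values())
    total_growth = (y1 - y0) / y0
    rows = {}
    for name, x0 in start.items():
        share = x0 / y0                  # horizontal axis of the diagram
        growth = (end[name] - x0) / x0   # vertical axis of the diagram
        rows[name] = (share, growth, share * growth)
    return rows, total_growth

start = {"primary": 26.0, "secondary": 31.8, "tertiary": 42.2}
end = {"primary": 28.0, "secondary": 40.0, "tertiary": 55.0}
rows, total = skyline(start, end)

# The rectangle areas (contributions) sum to the aggregate growth rate.
assert abs(sum(c for _, _, c in rows.values()) - total) < 1e-9

# Relative contribution, as in the appendix: contribution / aggregate growth.
relative = {k: c / total for k, (_, _, c) in rows.items()}
print(round(total, 3), {k: round(v, 3) for k, v in relative.items()})
```

The relative contributions always sum to one, which is why they are a convenient way to compare leading sectors across periods of very different aggregate growth.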
Fig. 1a GDP skyline analysis (1955–1960). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis; bar labels give each activity's share × growth.) Note: The sum is not necessarily equal to 100 because of statistical errors
(1) 1955–1960
The so-called "high-growth era" of the Japanese economy started in 1955 and ended around 1970. The years 1955–1960 correspond to the first 5 years of the high-growth era. Nominal GDP grew by 91.3% cumulatively over 5 years. The contribution
Fig. 1b Manufacturing GDP skyline analysis (1955–1960). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
of agriculture, forestry and fisheries was merely 5.2%. The contribution of the service sector, 4.3%, was even smaller than that of agriculture, forestry and fisheries. Note that all growth rates and contributions are values cumulative over 5 years. The growth rate of manufacturing industry, which accounted for one-third of GDP, was 132.5%, and its relative contribution was 37.7%. The high growth during this period was basically led by manufacturing industry. Figure 1b shows that the leading industries
Fig. 2a GDP skyline analysis (1960–1965). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
were primary metal (mainly iron and steel), general machinery, electrical machinery, and transport equipment (mainly shipbuilding rather than automobiles). As shown in Table 2, the "rationalization" of the iron and steel industry started in the second half of the 1950s, and productivity rose markedly in the industry. The first period (1955–1960) of the high-growth era was a process in which
Fig. 2b Manufacturing GDP skyline analysis (1960–1965). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
consumer durables such as televisions, electric washing machines and refrigerators, called the "three sacred treasures" at the time, diffused among households. Production of refrigerators, televisions, and passenger cars increased 50-fold, 34-fold, and 12-fold during this period, respectively. While all industries shared in balanced growth during the postwar reconstruction period (1946–1951), this period showed unbalanced growth concentrated in the primary metal and machinery industries, reflecting vital technical innovations in
Fig. 3a GDP skyline analysis (1965–1970). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
the iron and steel industry on the one hand, and the explosion of demand for consumer durables on the other. The contributions of non-manufacturing sectors, such as wholesale/retail, real estate, transportation, and communications, were also substantial, though not as large as that of manufacturing industry. The substantial contributions of non-manufacturing sectors and the diffusion of consumer durables underline the fact that the high growth was basically led by domestic demand, at least until the second half of the 1960s.
Fig. 3b Manufacturing GDP skyline analysis (1965–1970). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
(2) 1960–1965
According to the GDP skyline diagrams, the growth pattern of the first half of the 1960s is quite similar to that of the second half of the 1950s. However, Fig. 2b demonstrates significant structural changes within manufacturing industries. The contribution of primary metal, including iron and steel, which was 18.5% (the greatest among all industries) in the second half of the 1950s, fell to 6.0%
Fig. 4a GDP skyline analysis (1970–1975). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
during this period. In contrast, the contribution of transport equipment went up from 10.5% to 12.7%. Besides transport equipment, the contribution of chemicals was 10.1%. In addition, food products and beverages (14.4%) and others (11.5%), including clothing, publishing, rubber, etc., made substantial contributions. Although manufacturing was still a leading sector, its contribution declined from 41.3% in the second half of the 1950s to 32.9% in the first half of the 1960s.
Fig. 4b Manufacturing GDP skyline analysis (1970–1975). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
(3) 1965–1970
During this period, the Japanese economy enjoyed a very long boom (57 months, from October 1965 to July 1970). The contribution of manufacturing to growth became the greatest, 46.6% (relative contribution, 37.8%). Along with manufacturing, the growth of wholesale/retail and services was remarkable. The growth of
Fig. 5a GDP skyline analysis (1975–1980). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
wholesale/retail during this period reflected a historical process of modernization: as of 1964, the share of small retail stores was 73% and that of supermarkets 7.7%, whereas by 1974 their respective shares had shifted to 63% and 19%. As shown in the skyline analysis of manufacturing industry, the contribution of primary metal rose again. In this period, "machinery industries" including general
Fig. 5b Manufacturing GDP skyline analysis (1975–1980). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
machinery, electrical machinery, and transport equipment started playing a central role in manufacturing industry along with iron and steel. Their growth was supported by flourishing capital spending. For example, Table 3 shows the increase in capital spending in the petrochemical industry. Real capital spending rose threefold during this period. Note that capital spending peaked in 1970, preceding the first oil shock; by 1973, when the first oil crisis occurred, investment had already fallen to one-third of its 1970 peak.
Fig. 6a GDP skyline analysis (1980–1985). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
The productivity of manufacturing industry increased remarkably thanks to vigorous investment in the second half of the 1960s. This fact was noted in the "Economic White Paper 1973" of the Japanese government (Table 4). The iron and steel and machinery industries, which enjoyed extremely high productivity growth, were also the industries contributing most to economic growth (Fig. 3b). At the same time, they were export industries. This period was a turning point for the Japanese
Fig. 6b Manufacturing GDP skyline analysis (1980–1985). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
economy in that it shifted from a "domestic demand-led" to an "export-led" economy. The change is clearly seen in the trends of domestic and foreign demand in the automobile industry, Japan's representative export industry (Fig. 11).
Fig. 7a GDP skyline analysis (1985–1990). (Skyline diagram: shares of activities (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
(4) 1970–1975
Japan's high growth ended around 1970. The main reason was the change in domestic economic conditions rather than the first oil embargo in 1973 (Yoshikawa 1995). Whatever the reason, the growth rate fell substantially during this period. The growth rate of manufacturing industry fell most significantly, from
Fig. 7b Manufacturing GDP skyline analysis (1985–1990). (Skyline diagram: shares of manufacturing industries (%) on the horizontal axis, 5-year cumulative growth rates (%) on the vertical axis.)
138.2 to 69.7% (cumulative over 5 years). Its contribution to GDP growth also fell, from 46.6 to 25.1%. In contrast, the growth rates of construction and finance and insurance were a remarkable 153.5 and 149.8%, respectively. Wholesale/retail
Fig. 8a GDP skyline analysis (1990–1995). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: agriculture, forestry and fishing 1.9×−14.4; mining 0.2×−23.2; manufacturing 23.1×−2.3; construction 8.2×−5.9; electricity, gas and water 2.7×18.7; wholesale and retail 15.3×29.9; finance and insurance 5.9×18.1; real estate 12.0×27.7; transport and communications 7.1×21.2; services 17.7×24.2; government services 8.0×22.9.]
and services also achieved growth rates exceeding 100%, with contribution rates of 15.2 and 12.2%, respectively. Wholesale/retail and services can therefore be said to have become leading sectors, second only to manufacturing industry. Within manufacturing, the contribution of transport equipment (automobiles) is conspicuous. The automobile industry, now a representative export industry, established itself as the leading industry of the Japanese economy in this period.
Fig. 8b Manufacturing GDP skyline analysis (1990–1995). [Figure: 5-year growth rates (%) of manufacturing industries plotted against their shares of manufacturing GDP (%).]
In contrast, the contributions of primary metals, chemicals, and petroleum and coal products all declined. The contributions of general machinery and electrical machinery also declined substantially.
Fig. 9a GDP skyline analysis (1995–2000). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: agriculture, forestry and fishing 1.8×−4.8; mining 0.1×−27.2; manufacturing 22.2×−2.8; construction 7.4×−9.1; electricity, gas and water 2.7×1.9; wholesale and retail 14.1×−6.8; finance and insurance 6.1×3.9; real estate 11.5×−3.1; transport and communications 6.9×−1.3; services 20.4×16.4; government services 9.1×16.2.]
(5) 1975–1980
The growth rate of the Japanese economy declined permanently after the first oil crisis. The contribution of the service sector to GDP growth rose to 12.4%, comparable to the 15.7% of manufacturing industry. The contribution of wholesale/retail was also substantial.
Fig. 9b Manufacturing GDP skyline analysis (1995–2000). [Figure: 5-year growth rates (%) of manufacturing industries plotted against their shares of manufacturing GDP (%).]
Within manufacturing industry, unlike in the first half of the 1970s, the contribution of transport equipment declined, whereas those of primary metals such as iron and steel and of electrical machinery went up. Although textiles expanded until the first half of
Fig. 10a GDP skyline analysis (2000–2005). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: agriculture, forestry and fishing 1.5×−15.6; mining 0.1×−21.1; manufacturing 21.0×−5.6; construction 6.3×−14.6; electricity, gas and water 2.4×−11.7; wholesale and retail 13.8×−2.2; finance and insurance 7.0×15.5; real estate 12.0×4.1; transport and communications 6.9×−0.8; services 21.5×4.9; government services 9.4×2.6.]
the 1970s, its 5-year cumulative growth rate in the second half of the 1970s fell to −12.3%, making it the first industry to record negative growth. The textile industry, which had played such an important role in the Japanese economy from the Meiji era through the postwar period, finally lost its place as a leading industry in this period.
Fig. 10b Manufacturing GDP skyline analysis (2000–2005). [Figure: 5-year growth rates (%) of manufacturing industries plotted against their shares of manufacturing GDP (%).]
Table 2 Labor productivity in iron and steel (hours)a
Year  Blast furnace iron  Open-hearth furnace steel  Converter steel
1951  1.77  3.01  –
1952  1.71  2.93  –
1953  1.45  2.68  –
1954  1.36  2.46  –
1955  1.25  2.12  –
1956  1.07  2.00  –
1957  0.98  1.85  –
1958  0.91  1.83  –
1959  0.75  1.69  –
1960  0.66  1.54  0.75
1961  0.53  1.46  0.69
1962  0.48  1.57  0.68
1963  0.44  1.56  0.58
1964  0.38  1.36  0.48
1965  0.35  1.60  0.48
1966  0.30  1.62  0.42
1967  0.25  1.52  0.38
1968  0.21  1.72  0.38
1969  0.17  1.60  0.34
1970  0.16  1.87  0.34
a Hours required in "the direct process", excluding transportation, materials, analytical checks and maintenance
Source: Ministry of Labor, "Statistical Report on Labor Productivity"
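The speed of rationalization in Table 2 can be summarized as a compound annual rate of change in unit labor hours. A quick sketch using endpoint values taken from the table:

```python
# Compound average annual growth rate between two observations of
# hours required per unit of output (Table 2).
def annual_rate(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

# Blast furnace iron: 1.77 hours (1951) -> 0.16 hours (1970)
blast_furnace = annual_rate(1.77, 0.16, 19)   # roughly -12% per year
# Converter steel: 0.75 hours (1960) -> 0.34 hours (1970)
converter = annual_rate(0.75, 0.34, 10)       # roughly -8% per year
```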
(6) 1980–1985
The growth rate of GDP fell sharply, to 34.3% in the first half of the 1980s from 62.5% in the second half of the 1970s. The contributions of manufacturing industry and services were substantial, as in the preceding period. The contribution rate of manufacturing industry was 28.1%, and that of services 23.2%; these two sectors therefore accounted for more than half of GDP growth. The growth of wholesale/retail decelerated. Among manufacturing industries, the contribution of electrical machinery was striking. Contemporary observers all noted the importance of growing electrical machinery, particularly the key role played by semiconductors. In addition, general machinery and transport equipment contributed substantially to the growth of the Japanese economy. These three machinery industries are all export industries. This period was the time when Japan's current account surplus rose with the help of the strong dollar caused by the US budget deficits under the Reagan administration. Trade friction between Japan and the USA intensified to an unprecedented level.
Table 3 Capital spending in petrochemicals
Fiscal year  Ethylene capacity (1,000 t)  Capital spending (mil. yen)  Real capital spending (1970 = 100)
1956  0  8,349  3.3
1957  0  24,017  9.5
1958  43  23,396  9.9
1959  115  27,555  11.5
1960  115  38,494  15.9
1961  160  66,435  27.2
1962  316  55,904  23.3
1963  378  62,017  25.4
1964  633  91,229  37.3
1965  1,080  110,921  45.0
1966  1,190  77,202  30.6
1967  1,565  109,215  42.5
1968  1,970  202,837  78.3
1969  2,480  216,547  81.8
1970  4,010  274,299  100.0
1971  4,330  251,762  92.5
1972  4,980  152,467  55.6
1973  4,980  140,195  44.1
1974  5,065  240,818  57.7
1975  5,145  280,650  65.3
1976  5,185  226,853  50.0
1977  5,215  192,680  41.9
1978  5,235  110,643  24.7
1979  6,079  117,016  24.3
1980  6,257  200,614  35.4
Real capital spending is indexed to 1970 = 100 after capital spending is deflated by the wholesale price index.
Source: Watanabe and Saeki, "Petrochemical Industry on a Turning-point"
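Because Table 3's real series is nominal capital spending deflated by the wholesale price index and indexed to 1970 = 100, the implied price movement can be backed out from the two columns. A minimal consistency check, with base-year values taken from the table:

```python
# Implied wholesale price level relative to 1970, backed out from Table 3:
# real_index_t = 100 * (nominal_t / price_t) / (nominal_1970 / price_1970)
def implied_price_factor(nominal_t, real_index_t,
                         nominal_base=274299, real_base=100.0):
    return (nominal_t / nominal_base) / (real_index_t / real_base)

# 1975: nominal spending 280,650 mil. yen, real index 65.3
# -> wholesale prices roughly 57% higher than in 1970
factor_1975 = implied_price_factor(280650, 65.3)
```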
(7) 1985–1990
In response to the expanding current account surplus and the serious trade friction of the first half of the 1980s, the "1986 Maekawa Report" set the "national policy" of economic growth led by domestic demand. Meanwhile, after the Plaza Accord of autumn 1985, the yen appreciated from 240 yen per dollar at the beginning of 1985 to 120 yen in 1988. Japan plunged into the "bubble" economy supported by low interest rates and other policy measures for stimulating domestic demand. The "bubble economy" is reflected in the unprecedentedly high contributions of construction and real estate to GDP growth, namely 5.7 and 4.5%, respectively. The sum of the contributions of construction and real estate amounts to 10.2%, exceeding the 8.0% of manufacturing. The growth rate of manufacturing industry remained 28.5%, while that of construction was a spectacular 73.6%. During this "bubble" period, the Japanese economy was basically led by construction and real estate.
Table 4 Structure of labor productivity by industries in Japan, USA, Germany and UK. [Table, garbled in extraction: for each country, manufacturing industries ranked by productivity increase (%) in the first and second halves of the 1960s, together with the manufacturing average. Labor productivity is defined as the production index divided by the number of workers employed; increases (%) are rates of increase in accumulated points over 5 years. Source: 1973 FY "White Paper on Economics".]
Fig. 11 Car demand and production. [Figure: production, domestic demand and exports of cars by year, in units of 1,000.] Source: "Automobile Industry Handbook 1985"
In the bubble economy, the growth of transport equipment, the leading export industry, decelerated. On the other hand, the growth of fabricated metal products, including communication cables and aluminum sashes for housing, was remarkably high (54.1%), followed by electrical machinery (46.6%) and general machinery (43.8%). The growth of chemicals was also high (40.8%).
(8) 1990–1995
The real estate bubble boom of the second half of the 1980s ended at the beginning of the 1990s, and the Japanese economy entered a deep recession. The growth of nominal GDP fell to 12.9%, the lowest level recorded up to that time. In particular, it was the first time in the postwar period that manufacturing industry as a whole fell into negative growth. In addition, agriculture, forestry and fisheries, mining, and construction also showed negative growth. The Japanese economy of this period barely maintained low growth thanks to the positive growth of tertiary sectors such as wholesale/retail, real estate, and services. Figure 8b shows the performance of manufacturing industry, which experienced negative growth. Almost all industries dropped into negative growth, although food, petroleum and coal, chemicals, printing, etc. maintained slightly
positive growth. Electrical machinery, which had been a leading industry, showed almost zero growth. Compared with non-manufacturing sectors such as construction and wholesale/retail, the ratios of nonperforming loans in manufacturing industry were low. Nevertheless, Fig. 8b shows that the deep recession of the 1990s was basically brought about by the unprecedented stagnation of manufacturing industry.
(9) 1995–2000
The recession worsened further in the latter half of the 1990s. This prolonged economic slump eventually came to be called the "Lost Decade" (Yoshikawa 2008). Japan experienced a serious financial crisis in 1997–1998; see Motonishi and Yoshikawa (1999) for details of the credit crunch during this period. The nominal GDP growth rate fell further, from 12.9% in the first half of the 1990s to 1.2%. Not only did manufacturing industry continue to experience negative growth; tertiary sectors such as wholesale/retail, real estate, transport and communications, and construction also fell into negative growth. Only the service sector maintained positive growth. It was the growth of the public sector and the service sector that barely sustained the Japanese economy during this period. Among manufacturing industries, only food products and electrical machinery made positive contributions. Textiles, clothes, and leather products lapsed into declining industries because of the flood of cheap imports, mainly from Asia, triggered by the strong yen. Moreover, iron and steel, non-metallic mineral products, fabricated metal products, lumber, and furniture declined by more than 10% owing to the slump in housing and building construction after the collapse of the "bubble" economy.
(10) 2000–2005
The real GDP of the Japanese economy resumed expansion in 2003, but because of deflation the growth rate of nominal GDP stagnated at 0.3% cumulative over 5 years, a first for the postwar Japanese economy. Although finance and insurance, real estate, and services maintained positive growth, all other sectors, including manufacturing industry, fell into negative growth. In 2005 the share of the service sector reached 21.5%, exceeding the 21.0% of manufacturing industry. "Manufacturing, mining, and energy" ceded the top sectoral position to "financial intermediation, real estate, and business activities" during the second half of the 1980s in the USA, during the first half of the 1990s in the UK, and during the second half of the 1990s in Germany. In contrast, in Japan "finance and insurance plus real estate", which had finally recovered from the financial crisis, had not become the leading sector. As the manufacturing skyline analysis shows, the contributions of iron and steel and transport equipment went up in spite of the negative growth of almost all other industries.
The growth of iron and steel was thanks to the expanding Asian economies, including China and India. The same holds true for transport equipment. This shows that the economic recovery from the beginning of 2002 was mainly export-led.
International Comparison
In this section, we analyze the growth patterns (1970–2005) of the USA (Figs. 12–18), the UK (Figs. 19–25), and Germany (Figs. 26–32) using the same "skyline diagrams." The following facts emerge from a comparison with the growth pattern of the Japanese economy presented in the previous section.
Fig. 12 US GDP skyline analysis (1970–1975). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 4.0×82.9. 2. industry, including energy 27.7×50.6. 3. construction 4.9×47.1. 4. wholesale and retail trade, hotels and restaurants; transport 23.6×57.2. 5. financial intermediation and real estate 17.5×56.3. 6. other service activities 22.0×60.9.]
Fig. 13 US GDP skyline analysis (1975–1980). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.9×22.7. 2. industry, including energy 28.3×74.4. 3. construction 4.9×71.5. 4. wholesale and retail trade, hotels and restaurants; transport 23.0×65.8. 5. financial intermediation and real estate 19.7×91.9. 6. other service activities 21.1×63.7.]
1. The growth of the US and UK economies was much more stable than that of the Japanese economy. Except for agriculture, forestry and fisheries, whose share is small, no sector in the USA or the UK fell into negative 5-year cumulative growth. This confirms that the "Lost Decade" of the Japanese economy was an extremely unusual phenomenon among the advanced economies.
2. Unlike the Japanese economy, the US and UK economies were basically led by non-manufacturing sectors. Note in passing that the petroleum industry accounts for a significant share of "industry, including energy" in the USA and the UK. In the UK, for example, the contribution of "industry, including energy" rose dramatically in the second half of the 1970s; this was presumably caused by the development of the North Sea oil fields.
Fig. 14 US GDP skyline analysis (1980–1985). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.4×26.0. 2. industry, including energy 26.0×38.3. 3. construction 4.7×44.2. 4. wholesale and retail trade, hotels and restaurants; transport 23.1×51.3. 5. financial intermediation and real estate 22.3×71.2. 6. other service activities 21.4×53.2.]
3. Among non-manufacturing sectors, financial intermediation, real estate, and business activities made a substantial contribution in the USA and the UK. This trend was particularly notable during the periods 1975–1980 and 1995–2000 in the USA. In the UK, finance and real estate kept the role of leading sector continuously from 1975 to 2005. As of 2005, their shares in GDP were 32.5% in the USA and 31.8% in the UK; that is, one-third of GDP is now generated by finance and real estate in these two countries. In Japan, the corresponding share remains 19.0% (2005).
4. Like Japan, Germany is often considered an economy led by manufacturing industry. However, it has actually shifted to a growth pattern led by non-manufacturing sectors since the 1970s. As in the USA and the UK, finance and real estate is now a leading sector, with a share of 29% in 2005.
Fig. 15 US GDP skyline analysis (1985–1990). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.1×18.0. 2. industry, including energy 23.5×23.3. 3. construction 4.6×32.6. 4. wholesale and retail trade, hotels and restaurants; transport 21.9×29.6. 5. financial intermediation and real estate 24.8×51.5. 6. other service activities 23.2×48.1.]
Fig. 16 US GDP skyline analysis (1990–1995). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.6×−0.9. 2. industry, including energy 22.2×20.7. 3. construction 4.2×16.5. 4. wholesale and retail trade, hotels and restaurants; transport 22.2×29.6. 5. financial intermediation and real estate 26.3×35.2. 6. other service activities 23.4×28.6.]
Fig. 17 US GDP skyline analysis (1995–2000). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.2×2.4. 2. industry, including energy 19.4×17.2. 3. construction 4.7×51.6. 4. wholesale and retail trade, hotels and restaurants; transport 19.7×18.9. 5. financial intermediation and real estate 31.6×61.4. 6. other service activities 23.2×33.1.]
Fig. 18 US GDP skyline analysis (2000–2005). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.3×32.6. 2. industry, including energy 17.2×11.6. 3. construction 5.2×38.7. 4. wholesale and retail trade, hotels and restaurants; transport 19.0×21.5. 5. financial intermediation and real estate 32.5×29.6. 6. other service activities 24.9×35.4.]
Fig. 19 UK GDP skyline analysis (1970–1975). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.7×101.4. 2. industry, including energy 32.0×87.9. 3. construction 7.1×143.9. 4. wholesale and retail trade, hotels and restaurants; transport 21.6×111.7. 5. financial intermediation and real estate 15.3×105.1. 6. other service activities 21.2×168.1.]
Fig. 20 UK GDP skyline analysis (1975–1980). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.1×66.0. 2. industry, including energy 34.6×130.8. 3. construction 6.1×82.4. 4. wholesale and retail trade, hotels and restaurants; transport 19.2×89.9. 5. financial intermediation and real estate 17.7×146.0. 6. other service activities 20.3×103.9.]
Fig. 21 UK GDP skyline analysis (1980–1985). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.7×23.4. 2. industry, including energy 32.7×43.2. 3. construction 5.6×40.4. 4. wholesale and retail trade, hotels and restaurants; transport 19.3×52.7. 5. financial intermediation and real estate 20.8×78.8. 6. other service activities 19.9×49.0.]
Fig. 22 UK GDP skyline analysis (1985–1990). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.8×66.2. 2. industry, including energy 27.3×30.3. 3. construction 6.7×85.5. 4. wholesale and retail trade, hotels and restaurants; transport 21.6×74.8. 5. financial intermediation and real estate 21.9×64.4. 6. other service activities 20.6×61.5.]
Fig. 23 UK GDP skyline analysis (1990–1995). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.9×28.4. 2. industry, including energy 25.9×21.2. 3. construction 4.9×−5.6. 4. wholesale and retail trade, hotels and restaurants; transport 21.7×28.2. 5. financial intermediation and real estate 24.0×39.7. 6. other service activities 21.5×33.2.]
Fig. 24 UK GDP skyline analysis (1995–2000). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.0×−28.2. 2. industry, including energy 22.1×12.4. 3. construction 5.2×38.3. 4. wholesale and retail trade, hotels and restaurants; transport 23.1×40.4. 5. financial intermediation and real estate 27.1×48.9. 6. other service activities 21.6×32.6.]
Fig. 25 UK GDP skyline analysis (2000–2005). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.0×23.6. 2. industry, including energy 19.0×10.4. 3. construction 6.0×49.7. 4. wholesale and retail trade, hotels and restaurants; transport 22.2×23.4. 5. financial intermediation and real estate 31.8×50.8. 6. other service activities 20.0×19.0.]
Fig. 26 German GDP skyline analysis (1970–1975). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 3.0×29.1. 2. industry, including energy 35.2×37.6. 3. construction 7.0×22.9. 4. wholesale and retail trade, hotels and restaurants; transport 18.8×49.7. 5. financial intermediation and real estate 15.7×88.8. 6. other service activities 21.2×99.8.]
Fig. 27 German GDP skyline analysis (1975–1980). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 2.2×6.3. 2. industry, including energy 33.9×36.9. 3. construction 7.7×56.1. 4. wholesale and retail trade, hotels and restaurants; transport 18.5×39.9. 5. financial intermediation and real estate 17.1×55.5. 6. other service activities 21.3×43.0.]
Fig. 28 German GDP skyline analysis (1980–1985). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.8×2.8. 2. industry, including energy 33.2×23.8. 3. construction 6.0×−2.4. 4. wholesale and retail trade, hotels and restaurants; transport 17.6×20.2. 5. financial intermediation and real estate 20.2×49.0. 6. other service activities 21.5×27.1.]
Fig. 29 German GDP skyline analysis (1985–1990). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.7×19.9. 2. industry, including energy 31.6×25.8. 3. construction 6.1×36.1. 4. wholesale and retail trade, hotels and restaurants; transport 17.5×31.8. 5. financial intermediation and real estate 22.3×46.1. 6. other service activities 20.9×29.0.]
Fig. 30 German GDP skyline analysis (1990–1995). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.3×0.7. 2. industry, including energy 25.4×5.3. 3. construction 6.8×44.5. 4. wholesale and retail trade, hotels and restaurants; transport 18.0×34.0. 5. financial intermediation and real estate 26.4×54.8. 6. other service activities 22.2×38.9.]
Fig. 31 German GDP skyline analysis (1995–2000). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 1.3×10.3. 2. industry, including energy 25.1×9.6. 3. construction 5.2×−14.8. 4. wholesale and retail trade, hotels and restaurants; transport 18.2×12.3. 5. financial intermediation and real estate 27.5×15.7. 6. other service activities 22.8×13.9.]
Fig. 32 German GDP skyline analysis (2000–2005). [Figure: 5-year growth rates (%) plotted against shares of activities (%). Share×growth: 1. agriculture, hunting and forestry; fishing 0.9×−24.0. 2. industry, including energy 25.8×12.7. 3. construction 3.8×−18.8. 4. wholesale and retail trade, hotels and restaurants; transport 18.1×8.6. 5. financial intermediation and real estate 29.1×15.6. 6. other service activities 22.3×7.0.]
Theoretical Discussion
We have demonstrated that industrial structure changed drastically in the process of economic growth in postwar Japan. Similar changes in industrial structure apply, more or less, to all growing economies. These changes are certainly the outcome of interactions between demand and supply factors. On the demand side, income elasticity differs across goods and services; the non-homothetic utility function and the resulting Engel's Law induce changes in industrial structure. On the supply side, differences in scale economies across industries are an important factor generating unbalanced growth (Young 1928; Kaldor 1981). One must recall that for an industry to enjoy increasing returns, the growth of demand is a necessary ingredient.
In mainstream micro-founded macroeconomics, unbalanced growth has been largely ignored. The common assumptions of representative economic agents, such as the Ramsey consumer, and of "symmetric equilibrium" are prone to exclude unbalanced growth. Only recently has mainstream macroeconomics paid explicit attention to unbalanced growth. Although this is welcome, the standard assumptions restrain one from analyzing important aspects of unbalanced growth. For example, the assumption of a representative consumer precludes one from introducing the diffusion of demand in the economy as a whole. Recall that the diffusion of consumer durables played a vital role in Japan's high-growth era during the latter half of the 1950s and the 1960s. On the supply side, the traditional framework naturally focuses on changes in factor endowment due to capital accumulation to explain unbalanced growth (Acemoglu and Guerrieri 2008). However, as we have documented in this chapter, marked changes in industrial structure often occur within a relatively short period of time, as short as 5 years.
It is then difficult to believe that such changes are caused by changes in factor prices due to capital accumulation. Sector-specific innovations more plausibly explain changes in industrial structure that occur within a short period of time. Beyond that, any explanation of changes in industrial structure by way of changes in factor endowment must clear the same challenge as the Leontief Paradox in the theory of international trade. We believe that, overall, the demand-side explanation makes more sense than the supply-side explanation. Here, the distinction between the non-homothetic utility function and demand saturation remains an unresolved problem. Demand for many existing goods and services saturates for individual households. Accordingly, demand for such goods and services also saturates in the economy as a whole as they diffuse across households. Fisher and Pry (1971) fully established that demand for individual goods saturates. Goods and services, and accordingly the sectors producing them, necessarily experience their own demand life cycles. These life cycles of demand generate unbalanced growth. Aoki and Yoshikawa (2002, 2007) present a growth model based on demand
Changes in Industrial Structure and Economic Growth: Postwar Japanese Experiences
Fig. 33 Saturation of demand and emergence of new final goods or industries
saturation and demand-creating innovations; Figure 33 illustrates the saturation of demand and the emergence of new goods or sectors. Demand-creating innovations necessarily bring about unbalanced growth, and thereby changes in industrial structure.

Acknowledgements We gratefully acknowledge support from the Research Institute of Economy, Trade and Industry, IAA (RIETI).
References

Acemoglu D, Guerrieri V (2008) Capital deepening and nonbalanced economic growth. J Polit Econ 116(3):467–498
Aoki M, Yoshikawa H (2002) Demand saturation/creation and economic growth. J Econ Behav Organ 48:127–154
Aoki M, Yoshikawa H (2007) Reconstructing macroeconomics: a perspective from statistical physics and combinatorial stochastic processes. Cambridge University Press, Cambridge
Chenery H (1960) Patterns of industrial growth. Am Econ Rev 50(4):624–654
Clark C (1957) The conditions of economic progress, 3rd edn. Macmillan, London
Fisher J, Pry R (1971) A simple substitution model of technological change. Technol Forecast Soc Change 3:75–88
H. Yoshikawa and S. Miyakawa
Kaldor N (1981) The role of increasing returns, technical progress and cumulative causation in the theory of international trade and economic growth. Econ Appl 34(6):593–617
Kuznets S (1959) Six lectures on economic growth. Free Press, New York
Maddison A (1995) Monitoring the world economy 1820–1992. Development Centre of the OECD, Paris
Motonishi T, Yoshikawa H (1999) Causes of the long stagnation of Japan during the 1990s: financial or real? J Jpn Int Econ 13:181–200
Ohkawa K, Rosovsky H (1973) Japanese economic growth. Stanford University Press, Stanford
Ozaki I (2004) Industrial structure of the Japanese economy. Keio University Press, Tokyo
Rostow W (1990) The stages of economic growth, 3rd edn. Cambridge University Press, Cambridge
Schumpeter J (1934) Theory of economic development. Harvard University Press, Cambridge, MA
Schumpeter J (1939) Business cycles. McGraw-Hill, New York
Yoshikawa H (1995) Macroeconomics and the Japanese economy. Oxford University Press, Oxford
Yoshikawa H (2008) Japan's lost decade, revised and expanded edn. I-House, Tokyo
Young A (1928) Increasing returns and economic progress. Econ J 38:527–542
The Mesoeconomics of Social Industries Benoit Pierre Freyens
Introduction: What Role for Economics in the Social Sector?

Meso-level analysis focuses on the characteristics and dynamics of intermediary structures. In economics, meso-analysis studies intermediary layers of economic activity (industries, or sectors of activity) sitting between the operations of the firm (micro) and those of sovereign nations (macro). These internal economies rely extensively on the private sector and on market forces for their operations, particularly in industrial sectors such as manufacturing and construction (Andersson 2003). Interest in mesoeconomics is also strong in high-profile areas of service provision such as utilities or telecommunications (Laffont and Tirole 1993). In some of these service industries there are services that typically will not be provided (or will be provided imperfectly) by market forces owing to some source of market failure (free riding, positive externalities, credit constraints, etc.). An important segment of these market deficiencies in facilitating the exchange of services arises in what I will call "social industries". Examples of social industries include health care, corrective services, family services, aged care, public housing, disability services, welfare benefits, employment services, public education, and many more. Social industries are distinct sectors, existing in their own right with their own policy settings, "consumer" demand, cost drivers, labour markets, skill and resource bottlenecks, and service delivery models. The mesoeconomics of social industries, as a field of research, deals with the nuts and bolts of delivering social services to public "clients" (children, patients, students, customers, patrons, etc.) in an efficient and cost-effective way. Potentially, the scope for social mesoeconomics is very large, as economic questions abound in the social sector. What are the principal capacity, institutional and price constraints of social industries?
What is the extent of over- or under-supply in the underlying

B.P. Freyens (*)
University of Wollongong, Wollongong, NSW, Australia
e-mail: [email protected]

S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_8,
© Springer-Verlag Berlin Heidelberg 2011
labour markets? How "mature" are social industries? What are the key cost drivers in selected industries (technology, assets, labour)? What scope exists for genuine competition and marketisation in these industries? When competition is introduced, what are the main trade-offs of outsourcing services: cost cutting, quality of service, access to services, input or output efficiency? These fundamental questions are pivotal to the ultimate performance of all social policies. With proper answers to them, Governments and their agencies might seek to optimise potential synergies between different social sectors – for example, between education and labour- or health-market programs – or fine-tune their implementation strategies to achieve a better quality of care – say, in relation to homelessness. Governments may also design optimal ways to encourage behavioural change – for example, to discourage risk-taking behaviour, overconsumption of diagnostic services, or overreliance on welfare benefits – or to bring about cultural or structural change in particular social industries, such as through regulation or de-regulation. Governments may also use economic methods to diversify the production of services in the belief that finely tailored, more localised services will better match the aspirations of their constituencies. In the following sections I briefly explain (1) why social-sector mesoeconomics has been a long time coming (the "make or buy" decision for Governments), (2) what social quasi-markets are and the opportunities they represent for the use of economics in social industries, and (3) what the main socio-economic issues arising from the use of quasi-markets are and how they reflect on the make-or-buy decision.
"Make or Buy" in the Social Sector

Up until the early 1990s, the provision of most social services was still ruled by government fiat everywhere,1 with little role for competition, markets, and economic methods. To the extent that all services in a given social industry are provided directly with government resources (staff, facilities, etc.), on a universal basis, and funded through citizen-taxpayer contributions with minor (or no) citizen-client co-contributions, the stage is taken by cost-accounting and project-management methods, with economics a bystander. Public administrators will plan for and provide social services in ways that are consistent with their budget appropriations and departmental resources, using methods that conform with the norms (legal and procedural rules) promoted by their respective national audit offices.
1 An obvious exception is health care, where contracting-out approaches had been widely used prior to the 1990s (Chalkley and Malcomson 1998). Although health care is a social industry, its peculiarities (derived demand, open-ended services) and its distinctness as a very large industry separate it from most other social industries, and much of the discussion in this paper refers to other types of social industries.
In principle, then, direct public provision can proceed without much consideration of key economic forces such as recipients' willingness to pay for the service, the opportunity cost of the funding used to support the services, the distortions created by the services within the social industry considered, the matter of proper incentives among delivery staff, the market conditions prevailing for similar services elsewhere, etc. This state of affairs has long been deplored by the "Public Choice" school of economics (Tullock 1965; Niskanen 1971; Brennan and Buchanan 1980; Tullock 1987; Downs 1998; Buchanan and Musgrave 1999). Ignoring economic forces in service delivery is akin to turning a blind eye to the overall performance of social industries (better health, better job placements, better welfare standards in old age, better employment conditions for people with disability, etc.). This is not to say that, up until the early 1990s, direct public provision was the sole approach considered for service delivery. Since the origins of the welfare state, many Governments have provided social services indirectly by funding networks of third-sector organisations (charities, foundations, faith- or mission-based agencies, etc.) that provided a myriad of social services on behalf of the government. This approach separated the purchasing function from the providing arm of service delivery, but these arrangements remained characterised by the full financial dependence of service providers on non-contestable block grants and subsidies, which were renewed on a regular basis. Again, there was neither market valuation, nor marginal analysis, nor competitive wages and salaries, nor competitive forces in the provision of the services. Such arrangements therefore provided no noticeable role for economic analysis. By contrast, contracting-out other types of services had by then become remarkably popular with all layers of public administration (particularly local government).
Outsourced activities ranged from garbage collection and cleaning or maintenance services to defence procurement and airport management. Many corporate services and physical infrastructure acquisitions had also been outsourced from public administrations, especially in Anglo-American countries, not without a long suite of controversies (Dilger et al. 1997; Figgis and Griffith 1997; Quiggin 2002; Cannadi and Dollery 2005). Why, then, did contested contracts and the use of markets fail to take hold in social industries until very recently? There was, and still is, a presumption that social services "are different" (Wistow et al. 1996). On the one hand, service recipients are more vulnerable than the average customer elsewhere, their needs stemming from acute situations of individual insecurity related to poverty, disability, old age, mental ailments, social isolation, etc. (Forder 1997). Service failure here often spells tragedy rather than mere consumer disappointment or dissatisfaction. On the other hand, there is traditionally no commercial equivalent to government provision in those areas, so there are few supply alternatives to rely on to supplant direct public provision. Most of all, contracts in social industries are subject to considerable information-management challenges, particularly in defining, measuring and monitoring outcomes and quality of service (Fraser and Quiggin 1999). Contracts and markets in social services are therefore much riskier
endeavours than elsewhere and require an idiosyncratic approach (Walsh 1995; Hart et al. 1997; Chalmers and Davis 2001; Levin and Tadelis 2007). If contracts, competition and markets are not suitable for delivering social services, what role does this leave for sector-specific economics in social industries? Somewhat surprisingly, quite a substantial one. The potential rewards, in terms of good governance, of working out a role for the mesoeconomics of third-party service delivery are potentially large, and quasi-markets (a toned-down version of the competition/market gospel) are now actively promoted in social industries by the public agencies of several countries. It is also well worth emphasising that, alongside public services, the private sectors of all OECD nations have been delivering social services on a commercial basis for many decades, in areas as diverse as old-age and nursing homes, childcare centres and kindergartens, private schools, therapy counselling, etc. There is therefore a long-standing and well-established tradition of private social enterprises operating in the social sector, mostly serving the high-price segment of the market, and often complementing rather than competing with the public segment. Mostly, the creation of social quasi-markets by enterprising governments does not compete for the markets of these "high-end" private, for-profit social enterprises. At times, though, the latter do tender and compete with not-for-profits to join governmental quasi-market arrangements, when these appear commercially profitable.
Social Quasi-Markets

The mesoeconomics of social industries only came into being with the microeconomic reforms that started reshaping the supply and demand of social services in several OECD countries throughout the 1980s. In the USA, the UK, Australia, Canada, New Zealand, Sweden, the Netherlands and a few other countries, these reforms came partly as a necessity forced by post-oil-shock economic stagnation and the accompanying growing influence of supply-side ideologies (Davis and Wood 1998). From the early 1990s onwards, purchaser-provider splits started taking a different shape in some social industries (Vertigan 1999), opening up new economic avenues through the use of contestable contracts, competition amongst service providers and increased emphasis on monitoring outcomes. In several countries, this new arrangement became known as "quasi-markets". A quasi-market is a hybrid structure characterised by one principal (the funding agency, which may "act on behalf" of the Government but may not necessarily "be" the Government) and many competing private agents (for-profit and not-for-profit (NFP) service providers). The mix of for-profit and mission-based motivations is the first source of "quasiness" in these markets – a classic market would be assumed to be driven by profit and consumption gains alone. A second determinant of the hybrid nature of these markets is the switching function assumed by competition: providers compete against one another to win tendered contracts but may revert to
collaboration afterwards (until the next tender). Finally, quasi-market outcomes are subject to much more regulatory scrutiny than classic markets (where such checks and controls are usually restricted to the area of anti-trust and competitive pricing). Otherwise, quasi-markets keep the standard transactional attributes of standard markets: consensual exchange largely based on pricing, short-term relationships (Lane 2001), well-specified contractual requirements, and outcome assessment for future contractual readjustments (Newberry and Barnett 2001). Since the early 1990s, quasi-markets have literally mushroomed across social industries, mostly in Anglo-American countries. In Australia they have been used since 1998 to provide employment services competitively through the Job Network (Considine 2001; Eardley 2003), as well as various disability services (Spall et al. 2005). A quasi-market for family relationship services has likewise been operating since 2005 (Butcher and Freyens 2010). In the USA, where social welfare services typically rest with State and local Governments, the 1996 Personal Responsibility and Work Opportunity Reconciliation Act led to a large expansion of contested markets and contracting-out of welfare assistance (TANF) to private providers (Smith 1996; Mathematica Policy Research 2003). In Canada, as in New Zealand, quasi-markets are applied to community care services (Panet and Trebilcock 1998; Newberry and Barnett 2001). The UK operates quasi-markets for secondary school education (West and Pennell 2002; Bradley and Taylor 2010) and many community care services. Most of these countries have had long prior experience with the operation of health care quasi-markets (Powell 2003).
Trimming X-inefficiency?

The main driving force behind the adoption of competitive tendering and market-based mechanisms to provide social services is economists' belief that competitive forces will increase efficiency, reduce costs and improve the quality of the service. A key concept here is the controversial2 theory of X-inefficiency (Leibenstein 1966). X-inefficiency emphasises the economic slack generated by bureaucratic insulation, the lack of motivation and incentives among management and staff not subject to the market sanctions of competitive forces, and the exercise of discretionary power to supply services in excessive quantity (Bailey 1995; Freyens 2008). Because it is insulated from competitive forces, direct public provision of services (or indirect provision through uncontested funding) creates administrative "fat and slack", which only the introduction of market pressure would be able to trim. These claims are commonly made in the policy arena but are difficult to corroborate in practice. Governments that experiment with quasi-markets and contracting-out allege that these arrangements deliver large cost savings while
2 The theory of X-inefficiency rests on incentive and information deficiencies. The theory has been very popular in the management sciences but is viewed with scepticism by mainstream economists (Stigler 1976).
improving substantially on service outcomes. Independent assessment of these claims is, however, difficult to make, as the necessary data are only too rarely released into the public domain. In Australia, early evidence on the economic benefits of contracting-out social services is mixed (Industry Commission 1996; Ranald 1997; House of Representatives 1998), and more recent evidence is becoming difficult to obtain as "commercial in confidence" clauses increasingly keep the performance of quasi-markets away from external scrutiny. Nonetheless, various Australian empirical studies have estimated the cost-reduction gains of contracting-out government services such as some defence procurement activities, garbage collection, school or hospital cleaning services, prisons, bus services, etc., at local council, state and federal levels.3 Well-known estimates of cost savings average 20–30% of service expenditures (Domberger and Hall 1996; Industry Commission 1996; Domberger et al. 2002), although these findings are disputed for failing to incorporate administration, monitoring and risk-related costs; a more realistic estimate of cost savings would probably be in the 6–12% ballpark (Hodge 1999). Productive efficiency gains (i.e. in terms of least-cost production methods) are therefore significant, but do these gains represent a real furthering of policy makers' objectives, or do they come at the expense of some other measure of public interest? What if these savings are negated by compromises made on the quality of the service? Even if efficiency improvements result in markedly lower prices, service consumers are also likely to apply quality considerations when assessing the success of service delivery. There is by now a voluminous literature on these questions, with much of the discussion proceeding by comparison of case studies.4 In the subsequent sections I formalise and narrow down this debate to pure economic terms (efficiency and effectiveness).
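The gap between headline and realistic savings estimates is simple arithmetic. The figures below are assumed purely for illustration, using the 20–30% gross range and the 6–12% net range quoted above:

```python
# Illustrative arithmetic (assumed figures): gross contracting-out savings
# of 20-30% of service expenditure are eroded by the administration,
# monitoring and risk-related costs that the headline studies omit.
expenditure = 100.0                     # baseline service budget (index)
gross_saving = 0.25 * expenditure       # mid-range of the 20-30% estimates
transaction_costs = 0.16 * expenditure  # assumed tendering/monitoring/risk overhead

net_saving = gross_saving - transaction_costs
print(f"net saving: {net_saving:.0f}% of expenditure")  # inside the 6-12% range
```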
Quasi-Market Performance

How do we evaluate the net gains achieved by introducing contestability and competition? Understanding the underlying significance of this question requires a nuanced appraisal of all the trade-offs involved in contracting-out social services. In the following model, which significantly extends some original points made by Bailey (1995) and Fisher (1998), I suggest a basic, fleshed-out approach to this question, highlighting the main issues of relevance for social-sector mesoeconomics. First, I assume that a social service (e.g. job placements), which defines a social industry (e.g. employment services), is to be provided on a quasi-market by a certain number of service providers, all well informed about the needs of their customers.
3 For a survey of the earlier literature, see Domberger and Hall (1996) and Rimmer (1998). A more recent assessment is Jensen and Stonecash (2005).
4 See Jensen and Stonecash (2005) for a recent literature review.
These customers (service recipients) therefore have a genuine interest in the service outcome pursued by the service provider.5 Let Q be the quantum of social services provided (number of job placements, number of case-mixes, number of divorcing couples counselled), which can only be achieved with a certain number of inputs (hours of nursing services, hours of professional counselling, etc.). As is well known (Journard et al. 2004), labour is by far the main input into the provision of social services, and output (the quantum of services) therefore depends only on the hours of work h performed at a certain wage rate w by professional staff employed by service providers. Service output ("the service") is denoted by Q = F(h). I also define the aggregate cost expenditure incurred by service providers as I = w·h. In their sealed-bid applications, individual providers submit estimates of their share of this expenditure, I_i = w_i·h_i, which becomes their benchmark resource (funding) endowment if they win the tender. The precise level of these individual estimates will of course play an important part in determining who acquires the rights to provide the service in the quasi-market.
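To fix ideas, the notation can be instantiated numerically. The concave form chosen for F and all figures below are my own illustrative assumptions, not values from the text:

```python
# Toy instance of the model's notation: output Q = F(h) from labour hours h,
# and provider expenditure I = w*h (the benchmark funding endowment).
def F(h):
    """Assumed concave production of service quantum from staff hours."""
    return 10.0 * h ** 0.8

w = 45.0     # hourly wage rate, assumed
h = 120.0    # hours of professional counselling, assumed
Q = F(h)     # quantum of services delivered
I = w * h    # provider's expenditure in the sealed bid
print(I)     # prints 5400.0
```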
Service Benefits

Let B = b(Q) stand for the "consumption" benefits generated by this quantum of social service. b(Q) reflects potential service recipients' willingness to pay for their social needs to be addressed, similar to a demand schedule. For instance, service benefits may reflect the willingness to pay for reduced physical pain or improved life expectancy from health care, for a higher chance of averting family break-up through counselling, or for enjoying improved lifetime standards of living from a successful job placement. Benefits from social services correspond to individual valuations of expected service outcomes. Intuitively, the higher the quantum of a specific social service provided to an individual, a household, or a community, the more "consumption" benefits are derived from the service. However, for many of these services, patient or customer benefits accrue at a decreasing rate, and we have B = b(Q), b_Q > 0, b_QQ < 0, 0 < b < 1. The benefit function can be viewed as the expression of a specific need by a community, a demand for services to meet some physical or psychological need.
5 There would of course be exceptions for some specific social services not subject to standard scarcity constraints, i.e. services which respond not to a need from recipients but from society as a whole. In particular, services of a mandatory nature may not correspond to the equivalent of a demand schedule from their recipients. Retraining structurally unemployed individuals to provide them with new skills may be part of a mutual-obligations requirement, and such services mature slowly over the very long term. Similarly, the training of inmates in corrective detention centres may correspond to a need not of the inmates but of society, even though in the long run the needs of both may be aligned. In these situations, the benefits from the first quantities of service may appear small at first and larger later, after accumulation of a sufficient quantity of service.
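The diminishing-marginal-benefit assumption (b_Q > 0, b_QQ < 0) can be sketched with one concrete functional form, b(Q) = Q**beta with 0 < beta < 1. This choice is mine, for illustration, and is not a specification made in the text:

```python
# b(Q) = Q**beta, 0 < beta < 1, satisfies b_Q > 0 (benefits rise with the
# service quantum) and b_QQ < 0 (they rise at a decreasing rate).
def benefits(Q, beta=0.5):
    return Q ** beta

def marginal_benefit(Q, beta=0.5, dQ=1e-6):
    """Forward-difference approximation of b_Q at Q."""
    return (benefits(Q + dQ, beta) - benefits(Q, beta)) / dQ

# Marginal benefits shrink as the service drags on:
mb_early = marginal_benefit(1.0)   # first hours of counselling
mb_late = marginal_benefit(25.0)   # protracted service
print(mb_early > mb_late > 0)      # diminishing but still positive
```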
It is important to distinguish benefits from outcomes. First, benefits are entirely determined by individual preferences, whereas outcomes are not: the same outcome (e.g. a successful job placement) may generate different levels of benefits for different individuals. Second, benefits may accrue to individuals while the outcome remains unclear. An extra unit of service may generate benefits some recipients would have been willing to pay for (reflecting increments in hope, in confidence, in support, temporary relief from pain, etc.) but it may make no difference to the final outcome (incapacity to work, divorce, long-term unemployment, death, etc.). As indicated below, benefits therefore behave in a more continuous way with service provision, whereas outcomes are more discrete (good, bad or intermediate) and more volatile (e.g. a successful job placement may be invalidated by a subsequent job loss a few months later). The assumption of decreasing marginal benefits from social services will typically reflect the preference of customers in many social industries for a quick, rather than long or protracted, outcome. For instance, the provision of employment services through job placement agencies will generate customer benefits as long as "the client" (the job-seeker) remains engaged, a likelihood that decreases as service provision drags on over time without reaching a positive outcome. Similarly, social counsellors providing family relationship services in an attempt to avert marital break-up (or to re-focus attention on the interests of children) will often operate in a narrow window of time – the first few hours of the service deliver the highest benefits. The theoretical implication is that for many services, the longer the duration or the larger the quantum of the services provided, the smaller their marginal contribution (marginal benefit) to the final outcome (which can be characterised by success or failure).
I should also stress that the longer the duration of the service, the larger the risks it is exposed to (e.g. the potential for a degradation of the provider/client relationship as the customer's patience wears thin, etc.).
Service Costs

Now let C stand for the costs of providing the service. Under direct public provision, this cost comprises the direct cost of the taxation burden and indirect opportunity costs, including all economic distortions created by the tax. In pure economic terms, if the service could be traded in a competitive market, C could be viewed as a supply curve. In keeping with the law of supply, it is logical to assume that the higher the quantum of services provided, the higher the incremental cost of funding the social industry, due to increasing usage of already scarce resources (government credit constraints, limited access to taxpayers' dollars, skills shortages, etc.), and we have C = c(Q), C_Q > 0, 0 < c < 1. In a contracted-out setting, the nature of the cost to the government of funding the service changes little, except that the level of the cost would be expected to be lower. For simplicity I do not consider the case where the budget appropriation for
funding the quasi-market is open-ended (i.e. where the government does not specify an upper limit to expenditure, as in the "fee for service" model where the government meets the cost of service delivery ex post, as is the case for veteran health care in Australia). This assumption is realistic; most quasi-markets are established within well-defined public budget constraints. We denote the funding constraint by Ĉ = c(Q̂). The constraint sets a cap on the absolute quantum of services that can be provided through the quasi-market, although the exact number of services will depend on the cost-effectiveness of the system. Typically the government will select low-cost providers who still offer a credible, minimal standard of service. Although there will be situations where the government selects a single provider from a large pool of applicants, I restrict the analysis to the general case where the quasi-market consists of a network of n distinct providers.6 To distinguish between the choices of a representative provider and those of the industry as a whole, I index individual provider variables by i. I assume that the public funds appropriated by the government agency for the operations of the quasi-market will be fully spent, and that each successful applicant receives a share λ_i ∈ (0,1) of total funding, with Σ_i λ_i = 1, so that:

Ĉ = c(Q̂) = Σ_{i=1}^{n} λ_i·c(Q̂) = Σ_{i=1}^{n} w_i·h_i = Σ_{i=1}^{n} I_i = I   (1)
Upon winning the tender, successful providers are endowed with an annualised lump sum corresponding to the delivery-cost specifications made in the bid (cost recovery, a reasonable return on own resources used, depreciation of own assets used, etc.). For the private provider, the lump-sum allocation will typically just cover the costs, which we assume to be the wages paid to employees. Again, this is a simplification: many providers of social services such as childcare, marital counselling, employment services, etc. are listed for-profit companies, and for-profit providers will require a sufficient return on equity for their shareholders. However, these points of difference between the costs of for-profit and not-for-profit organisations play no role in the discussion.
6 A social quasi-market operated by a single service provider appears an unrealistic situation. However, although it would simplify the presentation, it would make no difference to my argument. Whether a single provider or a multitude of providers is selected, the competitive process is usually restricted to the tender only. After the tender, providers operate within the limits of their endowments and do not compete with one another (until the next tender). So, in theory at least, it does not matter whether the quasi-market is operated by one or many providers.
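The funding identity (1) can be checked with a toy tender. The budget cap, the shares and the wage rates below are assumed numbers chosen purely for illustration:

```python
# Sketch of funding identity (1): each winning provider i receives a share
# lam[i] of the capped budget C_hat = c(Q_hat); the shares sum to one and the
# budget is fully spent, so sum_i w_i*h_i = sum_i I_i = I = C_hat.
C_hat = 1_000_000.0                  # capped appropriation c(Q_hat), assumed
lam = [0.5, 0.3, 0.2]                # funding shares of the n = 3 winning bids, assumed
wages = [40.0, 50.0, 45.0]           # hourly wage paid by each provider, assumed

assert abs(sum(lam) - 1.0) < 1e-12   # shares must exhaust the appropriation

endowments = [l * C_hat for l in lam]                   # I_i = lam_i * c(Q_hat)
hours = [I_i / w for I_i, w in zip(endowments, wages)]  # h_i implied by I_i = w_i * h_i

I = sum(w * h for w, h in zip(wages, hours))            # aggregate expenditure
print(abs(I - C_hat) < 1e-6)         # identity (1): spending equals the cap
```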
Service Outcomes

Finally, let A = f(.) stand for the outcomes generated by the service. Outcomes are notoriously difficult to measure and in practice are often approximated by a number of reportable outputs (e.g. number of therapies performed, number of job interviews achieved, number of operations performed, etc.). To make any sense of whether these outputs have helped achieve a positive outcome, service providers will design and conduct feedback surveys (e.g. exit interviews of the type "on a scale of 0–10, report how much the therapy helped solve your problem"). In some cases, service providers will develop a longitudinal monitoring system, asking past customers at regular intervals about improvements in their circumstances (did they keep the job? their marriage? their health?). This information will often be required by the funding agency to enable it to assess outcomes. A = f(.) measures the quality of the service as perceived ex post by service recipients through the filling of the gap between their human wants and their expectations of remedies through the service. Improvements in the overall allocative efficiency of the industry can ultimately only be based on the outcomes, not the outputs, achieved by service provision.7 As with costs and benefits, we can expect service quality to be related to the quantum of services performed, albeit in a probabilistic way (more services increasing the chance of a successful outcome). A can be viewed as a match function f(Q) with its domain limited by the funding resources supporting the service (e.g. 3 h of free counselling per week, one nursing visit a day, etc.) and its range defined over the finite support [0,1]. If the services have fully met and addressed the identified needs, the outcome is f(Q) = 1. If the service fails completely to do so, there is no match between the need and the quantum of service, and f(Q) = 0. Otherwise the outcome is mixed and denoted by f(Q) = p ∈ (0,1), with f_Q > 0.
In other words, f(Q) reflects the accuracy of the match between service provision (ex post benefit) and the identified need (the expected benefit). This gives us a measure of the quality of service (QoS) achieved by this additional unit of service.
7 In this model, I interpret outcomes as a measure of the matched response to the community's demand b(Q) for the service. If the agenda of the government (e.g. curbing obesity) differs from that of citizens (enjoying the consumption of salty, high-carbohydrate food), then it can credibly be argued that there are two types of mutually exclusive outcomes: outcomes for society (O) and individual outcomes (A_i). I do not consider cases of divergence between the public interest and citizens' benefits in the model above. It is reasonable to argue that in most of these cases the public interest should override that of citizens when citizens' behaviour is self-destructive or causes negative externalities. Hence, even if a service consisting of reducing access to junk food reduces citizens' benefits from consumption, it should be clear that other arguments in their utility function will be improved (e.g. life expectancy, self-esteem, etc.).
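The match function f(Q) only needs to be increasing and bounded in [0,1]. One concrete form with these properties, f(Q) = 1 − exp(−kQ), is my own illustrative assumption rather than the author's specification:

```python
import math

# f(Q) = 1 - exp(-k*Q) is increasing in the service quantum (f_Q > 0),
# stays inside [0, 1], and reaches a full match f(Q) = 1 only asymptotically.
def match(Q, k=0.3):
    return 1.0 - math.exp(-k * Q)

assert match(0.0) == 0.0                 # no service, no match: f(0) = 0
p = match(5.0)                           # mixed outcome p in (0, 1)
print(0.0 < p < 1.0 and match(6.0) > p)  # bounded range and f_Q > 0
```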
The Mesoeconomics of Social Industries
A Cost/Quality Decomposition

The net contribution of the contracted-out service to the overall economic efficiency of the meso-economy considered (e.g. aged care or employment services) is given by the benefit-cost ratio (BCR), i.e. the benefits created by the contracted-out services as a proportion of their provision costs, b(Q)/c(Q). For the use of economic forces to improve social welfare, we need this expression to deliver a value at least as high as the benefit-cost ratio prevailing prior to competitive tendering and the creation of the quasi-market. That is, there would be expected to be at least no net welfare losses from providing additional units of service through outsourced arrangements.8 Therefore, we assume as a starting point that the funding agency estimates the BCR to be higher under quasi-market provision than under direct delivery, so that the "make or buy" decision is "buy", i.e.:

BCR_{QM} > BCR_{DD} \qquad (2)
In turn, the benefit-cost ratio can be further decomposed as the outcomes achieved per unit of budgeted cost, magnified by the ratio of benefits produced per outcome achieved:

BCR_{QM} = \frac{f(Q)}{c(Q)} \cdot \frac{b(Q)}{f(Q)} \qquad (3)
Expression (3) tells us that our overall efficiency measure of the economic gains from "quasi-marketing" service provision in a specific social industry will depend on cost-effectiveness, f(Q)/c(Q), and on the intensity of the benefits generated by the service, benchmarked against the service's outcomes. The overall cost-effectiveness f(Q)/c(Q) achieved by the service is thus the degree of response matching between service supply and community needs, expressed per unit of funding cost (the resource drain required to fund the response). Now, recall that f(Q) is a quality-of-service "matching" metric that tells us how closely the service provided has met its objective (meeting individual needs). Without loss of generality, and remembering that "production" Q (i.e. service delivery) is a function F(·) of input quantity h, we can rewrite expression (3) as:

BCR_{QM} = \frac{h}{c(Q)} \cdot \frac{F(h)}{h} \cdot \frac{f(Q)}{Q} \cdot \frac{b(Q)}{f(Q)} \qquad (4)
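With illustrative numbers (all values hypothetical), the telescoping structure of the decomposition in expression (4) can be checked directly in Python: the four ratios multiply back to the simple benefit-cost ratio b(Q)/c(Q).

```python
# Hypothetical service figures: h hours of input produce Q = F(h) units
# of service at cost c, generating outcome score f and benefits b.
h, Q, c, f, b = 200.0, 120.0, 10_000.0, 0.8, 14_000.0  # Q = F(h)

input_economy   = h / c   # input usage per dollar of cost, h/c(Q)
technical_eff   = Q / h   # services produced per input hour, F(h)/h
avg_quality     = f / Q   # match achieved per unit of service, f(Q)/Q
benefit_per_out = b / f   # benefits generated per outcome, b(Q)/f(Q)

bcr = input_economy * technical_eff * avg_quality * benefit_per_out
assert abs(bcr - b / c) < 1e-9  # the decomposition is an identity: BCR = b/c
```

The decomposition adds nothing numerically; its value is analytical, in isolating which lever (input economy, technology, or quality) moves the overall ratio.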
Multiplying both sides of the equality by f(Q), introducing providers' diversity (by indexing), using identity (1) between cost funding and cost expenditure, and

8 So far we assume the resource constraint is constant and not binding; otherwise an additional unit of service would have implications for the cost of funding.
B.P. Freyens
then dividing both sides of the expression by the benefits achieved by the relevant units of service yields:

\frac{f(Q)}{c(Q)} = \sum_{i=1}^{n} \frac{h_i}{I_i} \cdot \frac{F_i(h)}{h_i} \cdot \frac{f_i(q_i)}{q_i} \qquad (5)
Expression (5) tells us that aggregate cost-effectiveness depends on a combination of service quantity and service quality factors, namely:
• Input-expenditure efficiency h/I (quantum of inputs used per unit of cost expenditure – e.g. the number of nurses employed per $1 million spent on nursing services).
• Technical efficiency F(h)/h (number of nursing services provided per hour of nursing time – e.g. the number of patients attended by a shift of nurses over a given shift's working time).
• Average standard of service quality f(Q)/Q, i.e. the extent – per unit of service provided – of the match between recipients' expectations (of having their human needs addressed) and recipients' actual satisfaction. For instance, this could be the extent to which average nursing services not only delivered a duty of care, but also listened to patients' requests or complaints, spent enough time with patients, etc.
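Expression (5) can be made concrete with a toy meso-economy of two heterogeneous providers. All figures below are hypothetical; the point is only that aggregate cost-effectiveness is assembled from each provider's input economy, technical efficiency and average quality, and that the triple product telescopes to f_i/I_i.

```python
# Each provider i: input hours h_i, cost expenditure I_i,
# services q_i = F_i(h_i), and quality score f_i(q_i).
providers = [
    # (h_i, I_i, q_i, f_i) -- purely illustrative numbers
    (100.0,  9_000.0, 60.0, 0.7),   # cheap but lower quality
    ( 80.0, 10_000.0, 55.0, 0.9),   # dearer but better matched
]

cost_effectiveness = sum(
    (h / I) * (q / h) * (f / q)     # input economy x technical eff. x quality
    for h, I, q, f in providers
)
# The triple product telescopes to f_i / I_i for each provider:
assert abs(cost_effectiveness - sum(f / I for h, I, q, f in providers)) < 1e-12
```

The sketch makes the later trade-off visible: a provider can raise its contribution either by lowering I_i (cheaper inputs) or by raising f_i (better matching), and nothing in the sum forces both at once.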
The Cost vs. Quality Conundrum

Improving input economy, the first term of the right-hand side (RHS) in expression (5), is the most predictable outcome of adopting quasi-market arrangements. Ceteris paribus, better input economy should increase cost-effectiveness, the left-hand side (LHS) of (5). In a contracted-out setting, achieving higher economy on input expenditure requires obtaining more hours per unit of cost expenditure, i.e. improvements in the input performance-cost ratio, which can only be achieved by reducing the going hourly wage rate or other prevailing working conditions, or by replacing incumbent staff with lower-qualified recruits. One does not require any profound knowledge of labour economics to understand the implications: either staff incentives to perform will be affected, or new staff will be hired with lower qualifications. In both cases, QoS would be expected to decline, not increase. Hence, cost-effectiveness can be pursued by compromising on service quality, and expression (5) carries no presumption that contracting out social services will necessarily improve the overall economic welfare of the social industry considered. If providers also seek to improve service quality, the third term of the RHS of (5), then again, ceteris paribus, cost-effectiveness should increase. Cost-effectiveness is logically positively related to average quality because higher average quality improves the per-unit-of-cost match between supply and demand (a better outcome). However, the ceteris paribus is a big ask here. Can increases in quality be achieved
while keeping delivery costs constant, or even by lowering them? In other words, can contracts remove so much X-inefficiency that both quality and input economy can be made to improve? Even if the combined objective of quality improvements and cost savings were technically feasible, is it incentive-compatible? Are improvements in quality a credible goal of decisions to outsource social services? By the very definition of a competitive tender, rational successful applicants will trim their bottom line to provide the minimum standard of service specified by the contract (Quiggin 2002). What incentive could there be for providers to overachieve on quality, or even worry about the matter, when, on top of competitiveness issues, the typical lack of good quality measures will make any achievements debatable (Holmstrom and Milgrom 1991; Hart et al. 1997)? In all likelihood, QoS and incentive-compatibility will not be a priority of funding agencies either. Contracting out social services is invariably a response to macroeconomic pressure on Government budgets (to achieve real cost savings). It is too rarely a deliberate attempt to simultaneously raise the quality of the service, assuming such a move is at all possible. In practice, such "double dividend" improvements in cost-effectiveness are usually achieved through technological advances [the second term of the RHS in (5)], not by savings on input purchases [the first term of the RHS in (5)], because the latter has deleterious effects on staff morale, incentives and qualifications.
Thus the trade-off between cost savings and quality improvements lies in the fact that the latter cannot be achieved without somehow increasing the quantum or quality of inputs committed to service delivery, whilst the former creates severe distortions in the input market for the delivery of social services that will typically prevent any advances in quality.9 Although these problems also affect the delivery of non-social services, such as contracted-out corporate services, they are less severe there because a large, pre-existing base of commercial providers exists.
A Provider Perspective

To better understand the cost-quality trade-off, consider anew expression (5) and interpret it as representing the contractual requirements of a representative non-governmental service provider. Cost-effectiveness, input efficiency, technical efficiency and quality standards are now re-expressed as elements of a contract requiring that the QoS be upheld at least to pre-existing levels p0 (or to an arbitrarily given standard p0), which can be approximated ex post by a combination of output
9 There is by now a large body of studies corroborating the inverse relationship between QoS and cost savings. For a recent survey, see Jensen and Stonecash (2005).
measures and customer satisfaction surveys.10 Hence, the contractual requirements can be presented as:

\frac{n \, p_0}{\lambda_i c(Q)} = \frac{h_i}{w_i h_i} \cdot \frac{F(h_i)}{h_i} \cdot \frac{p_0}{q_i} \qquad (6)
The LHS of expression (6) sets the objective of the contract: achieving a match of at least p0 at a cost no greater than (λi/n)Ĉ. The RHS represents the means the representative provider can use to achieve these targeted improvements in cost-effectiveness, either by:
• Improving the ratio of input usage per unit of cost funded, that is, reducing the input expenditure committed to service delivery, e.g. by lowering hourly wages or reducing working conditions11 so that I1 − I0 < 0.
• Improving the productive efficiency of staff (the output/input ratio of labour), e.g. through the adoption of better work technology or by providing adequate training.
• Improving the average standard of quality of the service (the quality of the match between supply and demand), e.g. by allowing staff to dedicate more time to patients to better understand their needs.
At the level of the individual provider, expression (6) tells us that economising on inputs must necessarily lower service quality. To see why, first rule out any eventuality of technological advances [the second term of the RHS in (6)], which may or may not be possible but is exogenous to my argument about the relationship between QoS and input management. Then the most expedient way to meet the contractual obligation to operate a social service at the prescribed level of funding consists of saving on input expenditure: freezing or lowering wages, removing employment protection, hiring casual staff, lowering recruitment standards, etc. Other alternatives would seem to be even less palatable. Attempts to raise QoS, e.g. by providing more hours of service or by attracting better staff through better pay or better working conditions, without simultaneously economising on input expenditure, would necessarily affect the budget constraint in ways that would not be compatible with contract requirements.
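The provider's dilemma can be caricatured in a few lines of Python. The numbers, the function names and the linear quality response are my assumptions, not the chapter's: under a fixed funding envelope and fixed technology, cutting the wage bill is the lever that remains, and it drags expected quality below the contracted standard p0.

```python
def provider_budget_ok(wage: float, hours: float, funding: float) -> bool:
    """Contract feasibility: input expenditure w*h must fit the funding envelope."""
    return wage * hours <= funding

def expected_qos(wage: float, base_wage: float, p0: float) -> float:
    """Hypothetical quality response: paying below the going rate erodes
    staff skills and incentives, pulling QoS below the standard p0."""
    return min(1.0, p0 * (wage / base_wage))

funding, base_wage, p0 = 40_000.0, 25.0, 0.8
hours_needed = 2_000.0                      # hours required by the contract

assert not provider_budget_ok(base_wage, hours_needed, funding)  # 50,000 > 40,000
cut_wage = funding / hours_needed           # 20.0/h: the expedient fix
assert provider_budget_ok(cut_wage, hours_needed, funding)
assert expected_qos(cut_wage, base_wage, p0) < p0   # ...but quality slips
```

The linear quality response is deliberately crude; any decreasing relationship between wage cuts and QoS reproduces the same dilemma.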
Attempts to raise average QoS by reducing the quantum of services provided (thus freeing resources for a better-quality service to the remaining customers) are only10

10 So far we still abstract from transaction cost issues. Obviously, the problem with setting a predefined quality level in contractual print is that asymmetric-information issues will generally prevent any consistent and unprejudiced assessment of whether quality is being delivered or not. Total quality management processes such as ISO 9000 and others are not designed to address this problem, as they are mere auditing tools evaluating the standards upheld in processes, not in outcomes.
11 Note that reducing working time would not improve cost-effectiveness, as such reductions would cancel out in (5).
contract-compatible to the extent that the previously served but no-longer-served customers do not complain bitterly about their lot to the funding agency. If they do, as is all too likely, the assessment will be that any gain in quality was more than offset by these ignored human wants. On the other hand, attempts to raise both QoS and input economy would unavoidably be incentive-incompatible for staff and their unions. A possible compromise is to offer workplace and work-time flexibility or work-life balancing options, which provide non-remuneration benefits to staff whilst not coming at a cost to the provider. There is ample evidence of this type of response, particularly in the not-for-profit sector, but there are limits in terms of scope and productivity to what these arrangements can achieve. Flexibility is not a source of efficiency gains; instead it is a zero-sum game: more flexibility for the employee is less flexibility for the employer, and vice versa (Quiggin 2002). Faced with such dilemmas, a rational provider will select the course of action that is most likely to be contract-compatible: reducing input expenditure (to meet cost requirements) whilst accepting that this will certainly reduce QoS. Of course, reducing QoS is not contract-compatible either, but it will be much easier to dissimulate or argue about. As opposed to many corporate services (e.g. legal, IT or accounting services), the quality of a social service is most of the time very difficult to ascertain, notwithstanding the use of output measures and exit surveys. The effects of some therapies may only accrue (and be observed) in the long term, job placements may only be successful in the short term, a correct diagnosis may still fail to resolve a health issue, etc. In addition, it is very difficult to define what quality means in many service settings (Chalmers and Davis 2001).
Service recipients may attach more importance to the way they are addressed than to the service per se, which they may not feel qualified to evaluate (e.g. a medical diagnosis). This fundamental trade-off between cost reduction and QoS is insufficiently acknowledged in public announcements and does not blend well with Governments' rhetoric about the self-financing benefits of contracting out the provision of social services to private players.
Transaction Costs

The previous discussion seemed to imply that social quasi-markets are a desirable institution insofar as delivering more cost-effective services does not compromise QoS (or at least that gains in cost-effectiveness more than match any loss of quality). This is a necessary but not sufficient condition for tendering quasi-markets to non-government operators. In practice, there will be other considerations weighing heavily on the decision to outsource. The most important is the extent and nature of the transaction costs inherent in the principal-agent nature of quasi-market operations.
As soon as the planning/funding and delivery arms of the service are separated, new costs are created through the imperfect capacity of the funding agency to monitor the operations of the agent (the service provider) and to ensure that the terms of the contract are respected. This means that efficient contracts must be capable of being written, allowing a sufficiently clear specification of the quantum of service to be provided and of the minimal standards of quality the service needs to adhere to; and there must be a reasonable expectation that the performance of service delivery can later be assessed, that is, expression (5) must be more or less quantifiable by both sides of the contract. Accordingly, only services for which contracts can easily be specified and enforced should be subject to outsourced arrangements. For most social services, this is a big ask. However, even if the contract is efficiently written, risks may still emerge that providers will not act in the interest of their client (the funding agency) or in the interest of service recipients. Providers may be unlucky with their decisions or economic circumstances. Additionally, providers may turn out to be unreliable, incompetent or untrustworthy, or they may be reckless. In short, selected providers may be forced to wind down their operations or shut down the business prior to the expiration of the contract. If that is the case, what alternative acceptable sources of service supply are left to the funding agency? These additional risks need to be assessed prior to contracting third parties to deliver services, and it should be obvious that such assessments will typically be fraught with rules of thumb and subjective judgements. All these considerations will play a major role in the decision to introduce competitive forces into social industries previously ruled by government fiat.
If service planners within a funding agency are reasonably confident that the transaction costs of contracting out the delivery of a service to third parties are low, this will favour the decision to use contracts and non-government providers. But some degree of uncertainty will always remain in the system. Let us now integrate the effect of transaction costs into expressions (5) and (6). The first type of transaction cost identified above operates through a reduced capacity to write contracts due to information problems – a moral hazard issue (a matter of imperfect monitoring technology). Moral hazard enters cost-effectiveness considerations by absorbing (through high administrative and monitoring costs) some of the funds budgeted for service delivery. In expression (7), I capture the importance of these transaction costs relative to service funding by a parameter φ ∈ [0,1] which reduces the overall quantum of funding allocated to the service. If φ = 0, moral hazard costs are so high that it is impossible to write any contract. If φ = 1, moral hazard plays no role and expression (5) prevails. Any intermediate value of φ increases the contractual cost-effectiveness requirements for a given technology and QoS standard. The second type of transaction cost concerns higher volatility with respect to outcomes: matching services to human needs when service providers may eventually prove unreliable. This is an adverse selection problem or, in other words, a screening problem – selecting the right type of provider. This problem can be added to expression (5) by increasing the spread of the outcomes function
through a somewhat stochastic matrix H applied to the n × 1 vector of matching functions12:

\frac{f(Q)}{\varphi \, c(Q)} = \frac{1}{\varphi} \sum_{i=1}^{n} \frac{h_i}{I_i} \cdot \frac{F_i(h)}{h_i} \cdot \frac{H f_i(q_i)}{q_i} \qquad (7)
Consequently, if there is much uncertainty about the capacity of providers to deliver, a risk-averse bureaucracy (and there is no such thing as a risk-loving one) will require much larger gains in cost-effectiveness to compensate for any increase in the volatility of outcomes. It is fair to conclude that transaction costs will therefore play a major role in determining whether or not mesoeconomics is worth pursuing in the provision of social services. There is abundant evidence that even when this conclusion is positive, transaction costs still act as sand in the wheels of many quasi-markets, preventing any wide-ranging adoption of these daring solutions. Public agencies' "make or buy" decisions and their underlying transaction costs have been the subject of significant economic research, recently reviewed by Levin and Tadelis (2007).
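The role of φ can be sketched as follows (the functional choice and all numbers are mine, not the chapter's): shrinking φ below 1 scales up the cost-effectiveness a contract must deliver, so a risk-averse agency demands a larger buffer as monitoring frictions grow; the outcome-volatility matrix H would add a further premium on top.

```python
def required_cost_effectiveness(base_ce: float, phi: float) -> float:
    """Cost-effectiveness target once moral-hazard costs absorb a share
    (1 - phi) of the funding envelope; phi in (0, 1]."""
    if not 0.0 < phi <= 1.0:
        raise ValueError("phi = 0 means no contract can be written")
    return base_ce / phi  # expression (5) scaled up by 1/phi, as in (7)

base = 1.5e-4                                            # hypothetical frictionless target
assert required_cost_effectiveness(base, 1.0) == base    # no moral hazard
assert required_cost_effectiveness(base, 0.8) > base     # frictions raise the bar
```

The guard clause mirrors the limiting case in the text: at φ = 0 moral hazard costs are prohibitive and contracting is impossible.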
Conclusion

The mesoeconomics of social industries boil down to evaluating the costs and benefits of introducing competition, contracts and contestability into a largely secluded domain of public intervention. There are theoretically important potential rewards, in terms of good governance, from incorporating sound cost-effectiveness concepts into the mechanisms through which social services funded by the public purse are delivered to their recipients. Undeniably, better economic performance in service delivery would help achieve a higher return on taxpayers' dollars. Yet applying these concepts to the delivery of social services through privately operated quasi-markets is always going to be a "last frontier" type of exercise. In a very basic analytical presentation, I isolated and defined the main contributors to cost-effectiveness: managing input cost, improving labour productivity, and delivering quality of service. I showed that attempts to improve cost-effectiveness through quasi-markets can take place by trading off quality of service for higher service-to-cost ratios. Since labour is the principal input of social industries, economising on the returns to labour will often translate into lower staff skills, more staff rotation, less time and dedication, and less personalisation of the service, all of which could contribute to deteriorating quality of service. Because quality of service is hard for funding agencies to define and observe,

12 A situation of extreme volatility could be represented by adding an n × n bi-stochastic matrix. Less stochastic (i.e. more deterministic) shapes for matrix H would thus reflect a lesser screening problem.
such swaps between work conditions and quality can commonly be operated without undue concern for contractual sanctions. It is logical to infer that, from a social welfare perspective, such compromises represent redistributive shifts, but they will not improve cost-effectiveness or the overall efficiency of social industries. To a social planner, the net economic rewards from using quasi-markets in social industries may not be worth the effort. Customers' perceptions of compromises made on service quality, the loss of economic conditions for social workers and the presence of large transaction costs would commonly invalidate many, if not all, of the potential economic benefits delivered by a competitive, market approach to service delivery. Unsurprisingly, despite a trend towards more contestable access to public funding in many areas of government, the use of quasi-markets in the social sector remains relatively new and uncommon. Most social services are still ruled by government fiat, and the mesoeconomics of the social services industry remains experimental territory for public sector economists. This article should leave the reader with little doubt as to why this is so and will remain so in the foreseeable future.

Acknowledgements Some parts of this paper were presented at the 2008 Graduate Certificate in Policy Analysis at Griffith University, Brisbane, and benefited from the comments of several participants. I am grateful to Prof. John Wanna and to John Butcher for awakening my research interest in the economics of social policy. Any errors in the paper are my own and should not be attributed to these persons. An Australian Research Council linkage grant (LP0562398) supports this research and is gratefully acknowledged.
References

Andersson N (2003) The mesoeconomic analysis of the construction sector. Doctoral thesis, Division of Construction Management, Lund University
Bailey SJ (1995) Public sector economics: theory, policy and practice. Macmillan, London
Bradley S, Taylor J (2010) Diversity, choice and the quasi-market: an empirical analysis of secondary education policy in England. Oxf Bull Econ Stat 72(1):1–26
Brennan G, Buchanan JM (1980) The power to tax: analytical foundations of a fiscal constitution. Cambridge University Press, New York
Buchanan JM, Musgrave RA (1999) Public finance and public choice: two contrasting visions of the state. MIT Press, Cambridge, MA
Butcher JR, Freyens BP (2011) Competition and collaboration in the contracting of family relationships centres. Aust J Public Admin 70(1):15–33
Cannadi J, Dollery J (2005) An evaluation of private sector provision of public infrastructure in Australian local government. Aust J Public Admin 54(3):112–118
Chalkley M, Malcomson JM (1998) Government purchasing of health services. In: Culyer T, Newhouse J (eds) Handbook of health economics, vol 1. North Holland, Amsterdam
Chalmers J, Davis G (2001) Rediscovering implementation: public sector contracting and human services. Aust J Public Admin 60(2):74–85
Considine M (2001) Enterprise governance: the limits and shortcomings of quasi-markets as a tool for social policy – the case of Australia's employment services reforms. In: Edwards M, Langford J (eds) New players, partners and processes: a public sector without boundaries.
Center for Public Sector Studies, Victoria University/University of Canberra, Melbourne/Canberra
Davis G, Wood T (1998) Is there a future for contracting in the Australian public sector? Aust J Public Admin 57(4):85–97
Dilger RJ, Moffett RR, Struyk L (1997) Privatization of municipal services in America's largest cities. Public Admin Rev 57(1):21–26
Domberger S, Hall C (1996) Contracting for public services: a review of antipodean experience. Public Admin 74(1):129–147
Domberger S, Jensen PH, Stonecash RE (2002) Examining the magnitude and sources of cost savings associated with outsourcing. Public Perform Manage Rev 26(2):148–168
Downs A (1998) Political theory and public choice. Edward Elgar, Cheltenham
Eardley T (2003) Non-economic perspectives on the job network. Aust J Lab Econ 6(2):317–329
Figgis H, Griffith G (1997) Outsourcing in the public sector. 22/97, Parliamentary Library of New South Wales, Research Service
Fisher CM (1998) Resource allocation in the public sector. Routledge, London
Forder J (1997) Contracts and purchaser-provider relationships in community care. J Health Econ 16:517–542
Fraser L, Quiggin J (1999) Competitive tendering and service quality. Just Policy 17:53–57
Freyens BP (2008) Macro-, meso- and microeconomic considerations in the delivery of social services. Int J Soc Econ 35(11):823–845
Hart O, Shleifer A, Vishny RW (1997) The proper scope of government: theory and an application to prisons. Q J Econ 112(4):1127–1161
Hodge G (1999) Competitive tendering and contracting out: rhetoric or reality? Public Prod Manage Rev 22(4):455–469
Holmstrom B, Milgrom P (1991) Multitask principal-agent analyses: incentive contracts, asset ownership and job design. J Law Econ Organ 7:24–52
House of Representatives (1998) What price competition? A report on the competitive tendering of welfare service delivery. Commonwealth of Australia, Canberra
Industry Commission (1996) Competitive tendering and contracting by public sector agencies.
Australian Government Publishing Service, Melbourne
Jensen PH, Stonecash RE (2005) Incentives and the efficiency of public sector outsourcing contracts. J Econ Surv 19:767–787
Journard I, Kongsrud PM, Nam Y-S, Price R (2004) Enhancing the effectiveness of public spending: experience in OECD countries. OECD Economics Department Working Papers No 380, OECD Publishing, Paris
Laffont JJ, Tirole J (1993) A theory of incentives in procurement and regulation. MIT Press, Cambridge, MA
Lane JE (2001) From long-term to short-term contracting. Public Admin 79(1):29–47
Leibenstein H (1966) Allocative efficiency vs. "X-efficiency". Am Econ Rev 56(3):392–415
Levin JD, Tadelis S (2007) Contracting for government services: theory and evidence from U.S. cities. NBER Working Paper No 13350, National Bureau of Economic Research, Cambridge
Mathematica Policy Research (2003) Privatization in practice: case studies of contracting for TANF case management. March 2003, Mathematica Policy Research, Inc., Report to the Department of Health and Human Services, Washington, DC
Newberry S, Barnett P (2001) Negotiating the network: the contracting experiences of community mental health agencies in New Zealand. Financ Account Manage 17(2):133–152
Niskanen WA (1971) Bureaucracy and representative government. Aldine Atherton, Chicago
Panet PD, Trebilcock MJ (1998) Contracting-out social services. Can Public Admin – Administration Publique du Canada 41(1):21–50
Powell M (2003) Quasi-markets in British health policy: a longue durée perspective. Soc Policy Admin 37(7):725–741
Quiggin J (2002) Contracting-out: promise and performance. Econ Labour Relations Rev 13(1):88–204
Ranald P (1997) The contracting commonwealth: serving citizens or customers? Public accountability, service quality and equity issues in the contracting and competitive tendering of government services. Public Sector Research Centre Paper No 47, University of New South Wales, Sydney
Rimmer S (1998) Competitive tendering and outsourcing – initiatives and methods. Aust J Public Admin 57(4):75–84
Smith SR (1996) Transforming public services: contracting for social and health services in the US. Public Admin 74(1):113–127
Spall P, McDonald C, Zetlin D (2005) Fixing the system? The experience of service users of the quasi-market in disability services in Australia. Health Soc Care Community 13(1):56–63
Stigler GJ (1976) The existence of X-efficiency. Am Econ Rev 66(1):213–216
Tullock G (1965) The politics of bureaucracy. Public Affairs Press, Washington, DC
Tullock G (1987) Public choice. In: Eatwell J, Milgate M, Newman P (eds) The New Palgrave: a dictionary of economics. Macmillan, London
Vertigan MJ (1999) Private provision of services and infrastructure – an expanding industry of the nineties. Aust J Public Admin 58(3):61–65
Walsh K (1995) Public services and market mechanisms, chapter 5 – contracts and competition. Macmillan, Basingstoke
West A, Pennell H (2002) How new is new labour? The quasi-market and English schools 1997 to 2001. Br J Educ Stud 50(2):206–224
Wistow G, Knapp M, Hardy B, Forder J, Kendall J, Manning R (1996) Social care markets: progress and prospects. Open University Press, Buckingham
Governmental Discrimination Between Sectors: The Case of Australian Water Policy

Lin Crase and Sue O'Keefe
Introduction

This chapter raises important questions about the way economists conceptualise the interaction between economic agents. It was noted earlier by Händeler that "all branches of the economy are linked, each one is directly or indirectly a sales market for the other". However, the thesis presented in his chapter is that aggregate measures of these interactions count for little. Similarly, Mann noted that the behaviour exhibited within sectors differs substantially, and thus aggregating economic behaviours across sectors is likely to disguise important nuances. For instance, he observed that "in some sectors, operators tend to spend their money locally whilst in others people buy from more distant sources", with the consequence that "the sectoral structure of an economy will also determine the spatial face of the network through which added value is generated". Most of these ideas are not especially contentious, but the definition of "sectors" is far from precise and the usefulness of the concept in the context of public policy formulation is not always clear. This chapter represents a cautionary note on the argument for distinguishing sectors as the unit of economic debate, especially when the motivation for this approach is grounded in political interest in the preservation of rents. We present a case where the distinction of "sectors" has, in our view, proven unhelpful in improving welfare and promoting reforms that optimise the use of scarce resources. Unlike the other contributions in this volume, this chapter deals with an economic problem in Australia: namely, the allocation of water resources between competing users. Nevertheless, there are useful lessons in this case, and invariably these will resonate with European readers who are contemplating the role of sectoral analysis in economic debate.
L. Crase (*) • S. O'Keefe
La Trobe University, Melbourne, VIC, Australia
e-mail: [email protected]

S. Mann (ed.), Sectors Matter!, DOI 10.1007/978-3-642-18126-9_9, © Springer-Verlag Berlin Heidelberg 2011
In the context of this chapter we focus on irrigated agriculture as a sector in its own right. As in many industrialised nations, the economic significance of agriculture in Australia no longer matches its social and political influence. Australia is amongst the most urbanised nations on earth, and yet much of the electorate would find it difficult to conceptualise an economy where agriculture was not portrayed as being important. One of the consequences of this lingering affinity with yeomanry has been the development of a policy approach which indirectly promotes the agricultural sector, often at the expense of urban-based sectors or environmental interests. This is particularly evident in the context of water, where reallocating the resource away from agriculture has proven especially difficult. What makes these events even more striking is the fact that Australia is usually described as the driest inhabited continent on earth – a place where water needs to be used for the highest possible return. This chapter considers the difficulty of these policy choices and their impacts on the overall performance of the economy. We also trace the ramifications for the management of the natural resource base. In addition to considering recent policy episodes and their weaknesses, observations are drawn about preferred adjustment mechanisms and why the distinction of "sectors" needs to be treated cautiously in policy formulation. The chapter itself is divided into five main parts. In the following section we briefly outline the physical distribution of water resources and contrast this with the policy approaches that dominated water management until the last few decades. The details of recent policy endeavours to promote greater interaction between the sectors that call on water resources are described in section "Inter-sectoral Water Trade".
An analysis of the barriers to this approach is offered in section “Constraints to Inter-sectoral Market Interaction” before preferred policy approaches and broader lessons are presented in section “Lessons for Sectoral Analysis and Concluding Remarks”.
Water and its Historical Deployment in Australia

Water is an inherently variable resource in Australia, much more so than in many other places on Earth. This variation occurs both temporally and spatially. For instance, the north-east portion of the continent is characterised by tropical conditions, where rainfall commonly exceeds 2,400 mm per year, while the inland usually receives less than 200 mm per year. This variability is accompanied by high rates of evapo-transpiration, which collectively give rise to very low rates of runoff compared to other continents. Thus, while the Australian continent accounts for a little over 5% of the global land mass, it generates only 1% of total runoff (Letcher and Powell 2008).

An important and relatively obvious corollary of these physical dimensions of water is that sectors seeking access to large quantities of reliable water resources need to build larger, and thus more expensive, storages than in most other places. For example, Smith (1998) estimates that an Australian water reservoir can only achieve the same level of reliability as its European counterparts if it is six times larger, thus allowing for the greater fluctuation in inflows without interrupting supply. In economic terms, this clearly places those sectors that rely on ample water at a significant cost disadvantage relative to international producers.

Notwithstanding the significance of these hydrological and economic rudiments, Australian governments have actively promoted the development of an irrigated agriculture sector for over a century. This is particularly evident in the area known as the Murray–Darling Basin, where substantial infrastructure investments were undertaken to harness water for irrigation, especially in the latter parts of the nineteenth century and most of the twentieth century.

The Murray–Darling Basin occupies about a seventh of the Australian land mass and spans several states and territories. It is Australia's most productive agricultural area, and the majority of the irrigated agriculture sector (85%) is located in the basin. The two main river systems in the basin are the Darling and the Murray. The former has its headwaters in the north, flowing south to a confluence with the Murray. The Murray's headwaters are in the south-east of the basin, with water flowing slowly west on a modest gradient. On average, the basin generates around 13,000 GL of flow per year, with most coming from the Murray system and its tributaries (Department of the Environment, Water, Heritage and the Arts 2009).

Until the twentieth century Australia comprised separate British colonies. The two largest colonies by population were New South Wales and Victoria. Each sought to harness water resources to develop an irrigation sector, although South Australia, a downstream colony, initially had a preference for using the rivers for navigation.
Ultimately, a deal was struck in the form of the Murray–Darling Basin Agreement, which saw most of the average available water resource appropriated by the larger upstream states, with a guarantee of minimum flows to South Australia. The Agreement was signed in 1914, by which time Australia had become an independent nation and the colonies had been constituted as states. Rivalries between the states, a strong desire to construct infrastructure demonstrating the dominance of man over nature, and the determination to build a nation all played a role in the growth of irrigation. Communal irrigation schemes were developed to settle soldiers returning from the World Wars. Dams were financed by the public purse, and the ambition of greening the desert with the help of a noble yeomanry remained the uncontested mantra until the 1960s.
Inter-Sectoral Water Trade

Davidson (1969) was at the forefront of criticism of the expanding irrigation sector in Australia during the 1960s. Ironically, his arguments were not dissimilar to those presented by colonial Treasury officials in the late nineteenth century, when the ability of irrigation to pay its way was already seriously questioned. Davidson (1969) articulated the principles for a successful agricultural sector in Australia – namely, utilising large tracts of arable and cheap land, employing low levels of labour, and producing a durable commodity that could withstand export. An irrigation sector was the antithesis of this and was never likely to withstand the pressures of global market competition (Musgrave 2008). The upshot was that the irrigation sector was never likely to be able to stand on its own feet and would require continued support from the public purse.

Enthusiasm for fiscal conservatism and greater attention to international competitiveness became popular policy approaches in Australia in the 1980s, just as occurred in many other western economies. In the case of irrigation, this was accompanied by mounting evidence of serious environmental harm caused by excessive extraction of water from inland rivers and streams. By this time most of the flows in the southern Murray–Darling system had been seasonally inverted to suit irrigation demand rather than the indigenous species. In addition, around 11,000 of the 13,000 GL of flow were extracted with the sanction of the state, with around 95% consumed by irrigation. Moreover, the projected growth in extractions seemed destined to exceed natural inflows, undermining economic and environmental outputs from the basin.

In the context of competing sectoral interests in water, two major policy initiatives came at this time. First, a limit or "cap" was placed on extractions from the rivers of the Murray–Darling Basin. A series of reforms was also agreed, premised on treating water as an economic good subject to competition from different sectors. In simple terms, a market approach was, prima facie, to be used to allocate water between competing sectors. Water access rights were to be separated from land, and water use was to be priced such that delivery and other costs were recovered by the agents providing those services. Trade in allocations and access rights was to be encouraged, such that competing interests in water would purchase rights of access.
Thus, new sectoral interests in water were to be required to purchase water from existing users rather than rely on government largesse to appropriate more water from an already over-allocated resource pool. In addition, the historic practice of unquestioningly using public funds to subsidise infrastructure for the irrigation sector was to be abandoned.
Constraints to Inter-Sectoral Market Interaction

This "economic good" approach to water allocation attracted strong praise from international observers of water allocation decisions. For instance, The Economist proclaimed in 2003 that Australia should be "the country that takes top prize for sensible water management". However, the reality of policy decisions, and how this translates into the distribution of sectoral advantage and rents, is markedly different from the neoclassical ideal.1 At the level of policy application there have been at least three reasons why the inter-sectoral reallocation of water resources has proven problematic, in the conventional neoclassical economic sense at least.

1 This is not to contend that the original distribution of rights (and thus rents) counts for little in politico-economic terms, as implied by some interpretations of Coase.
Governmental Discrimination Between Sectors
Infrastructure Advantage

First, in an effort to reduce the overall impact on the most prominent extractive user of water, namely irrigation, governments have been keen to explore mechanisms for mitigating the impacts of markets. This is not entirely surprising given the historical prominence of irrigation as a vehicle for achieving social policy ambitions and the overriding affinity for agriculture in the Australian milieu. As noted earlier, and in line with earlier observations by Mann in this volume, there are also significant spatial implications of policy in some domains. Irrigation is invariably seen as a rural activity that brings benefits, in terms of increased income and wealth, to communities that are spatially separated from the major urban centres in Australia. Accordingly, government sponsorship of a particular sector can be presented as a policy seeking to alleviate wider pressures on communities.

It is important to note that the entire agricultural sector in the Murray–Darling Basin employs fewer than 100,000 people, and irrigation accounts for around a third of the total value of agriculture. Agriculture is also a minor contributor to national income, although it provides non-trivial exports (see, for example, ABS 2008). In the context of a volume that seeks to explore the role of sectors, the political and cultural clout of this sector far outweighs its economic might. As evidenced by recent experience with a drought of unprecedented severity, the contraction of water allocations to irrigated agriculture can be expected to have only minor impacts on the national economy and even on regional economies. And yet Australians retain a strong association with agriculture, albeit often informed by outdated notions of food security, environmental quality and cultural sovereignty. More specifically, many urban Australians would be surprised to learn that the majority of agricultural production in the Murray–Darling Basin is exported for profit.
This has provided fertile ground for irrigation sector lobbyists to argue that a redistribution of water away from irrigated agriculture will invariably lead to domestic food shortages in urban locales or calamitous scarcity on world markets. In practice, Australia is a food-producing nation located a considerable distance from many paying markets. The consequences for world food production, should water be allocated away from Australian irrigation, are modest at most.

In an effort to respond to the claims of the irrigation lobby, governments at state and national level have embarked on a major public infrastructure program in irrigation. This is a deceptive policy inasmuch as it has been disguised as a vehicle for delivering broader environmental outcomes and yet is decidedly at odds with the policy framework that purportedly promotes greater inter-sectoral competition for water resources. The argument conventionally employed in this context is that public investments in irrigation infrastructure can "save" vast sums of water and make irrigated agricultural production more technically efficient (see, for instance, Howard 2007). The water that is purportedly "saved" can then be redeployed to satisfy other competing sectors without the need to invoke a market transaction that bids the resource away from irrigation. Regrettably, this is contrary to most international evidence on infrastructure of this form (see, for example, Perry 2009), especially when scrutinised at the basin-wide scale that matches the policy ambition. Water infrastructure that supposedly makes the irrigation sector more efficient does little more than replicate the water policy mistakes of the past century in Australia. The water that is technically saved in a fully allocated river system, like the Murray–Darling, is not saved at all. In a physical sense, such infrastructure simply redistributes water in time and space, since water that was previously "lost" has already been allocated to downstream users (Crase and O'Keefe 2009).

But this is arguably not the most severe distortion to the reallocation process and what it means for sectoral interaction. Subsidising infrastructure for the irrigation sector raises the marginal revenue product in that sector, or at least in specific locations. This invariably increases the bid price for water in those districts favoured by government largesse. In simple terms, farmers in these districts now bid water away from other users and simultaneously undermine the reliability of water rights outside those districts. This would not be a significant public policy problem if Australian water bureaucracies had the capacity to "pick long-term winners", but the historical evidence on this front is hardly encouraging.2
Barriers to Trade

As noted earlier, one of the water policy reforms that attracted international praise was the decision to establish market transfer of water between competing interests. The rationale for this approach is well established in the economics literature – those who value the resource most will bid it away from those who value it least, thereby inducing a welfare gain. We also noted earlier that the initial allocation of water in Australia was heavily biased in favour of agriculture, in large part because of the accompanying social ambitions of government. Establishing a market for water was to have provided a vehicle for other sectors to make a legitimate claim on water access. For example, growing urban communities and expanding industrial users could access water by simply purchasing access rights from agriculturalists. The theoretical gains from such trades were considerable, given the divergence between the returns on water in irrigated agriculture and the value added by water in either urban or industrial uses. For instance, annual water allocations in Victoria in 2010 were commonly traded between irrigators for about $A0.06 per kilolitre. Potable water in urban communities in the same region retails for around $A1.00 per kilolitre.3

2 Temptations to make global predictions about the role of government specifically in irrigation and in agriculture generally need to be resisted at this point. In economies like Australia, with well-developed market institutions, the case for government intrusion in farmer decisions is likely to be weak. Similar conditions cannot be assumed of all nations.

3 At the time of writing the Australian Dollar ($A) was equivalent to about $US0.90.
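The scale of the gains implied by this price gap can be illustrated with a back-of-the-envelope calculation. This is a sketch only, using the indicative prices quoted above; the urban retail price bundles treatment and delivery costs, so the figure overstates the pure scarcity value of the water itself.

```python
# Illustrative gross value differential implied by the prices quoted in the
# text: ~$A0.06/kL for irrigation allocations, ~$A1.00/kL urban retail.
# The true welfare gain per unit traded is smaller, since the retail price
# includes treatment and delivery costs.

KL_PER_GL = 1_000_000  # 1 gigalitre (GL) = 1 million kilolitres (kL)

def gross_gain_per_gl(irrigation_price: float, urban_price: float) -> float:
    """Gross value differential, in $A, of reallocating 1 GL between sectors."""
    return (urban_price - irrigation_price) * KL_PER_GL

print(f"${gross_gain_per_gl(0.06, 1.00):,.0f} per GL")  # roughly $940,000 per GL
```

Even allowing for substantial delivery and treatment costs, a differential of this magnitude per gigalitre suggests that unexploited trades between the irrigation and urban sectors are costly.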
And yet relatively few large trades of this form have occurred, and restrictions on water use in urban areas are commonplace in Australian cities, even those with straightforward hydrological connections to the irrigation sector. For instance, Canberra, the national capital, has consistently required its citizens to limit outdoor water use, regardless of householders' willingness to pay to retain local environmental amenity (see, for example, Centre for International Economics 2008) and regardless of the city's location on a major tributary of the Murray (the Murrumbidgee River), where most water is consumed by downstream irrigators. Moreover, some citizens take pride in informing on their neighbours should they transgress the stringent behavioural constraints on water usage (see, for example, Cooper 2009). Given the magnitude of the costs of restrictions on water consumption in urban areas (see, for example, Grafton and Ward 2008), the reluctance of the urban water sector to engage in trade with irrigation remains perplexing.

In this context there are several "non-economic" barriers to trade that warrant mention, along with other more subtle economic constraints. Amongst the former is the perception within the polity that any adjustment to irrigated agriculture would have unacceptable political and social costs. Supporting this view is a range of reports (see, for example, RMCG 2009) that point to the demise of many regional and rural communities if water is reallocated elsewhere. Regrettably, much of this analysis is either carelessly developed or deliberately deceptive. For instance, the presumption is that those in irrigated agriculture who would willingly sell water would receive nothing in return and would have no alternative productive activities to which they could devote their remaining resources.
The folly of this approach is well documented by the Productivity Commission (2010), but the political influence of such work has been substantial. Coupled with spurious arguments about food security, these claims create strong political forces that constrain the trade of water away from the irrigation sector to urban users.

There are also important feedback effects between these sectoral political considerations and the infrastructure issues described above. Illustrative of this is the euphemistically named Northern Foodbowl Modernisation Project in Victoria. The background to this project is the extensive and ongoing drought in Victoria. Faced with dwindling water supplies for the largest city (Melbourne), the state government sought to divert water from the northern irrigation districts. The hydrology is such that Melbourne is not naturally connected to these districts, and a pipeline has been constructed to facilitate the water transfer. Sensing that such a project would attract criticism, the state government chose not to purchase water from the irrigation sector but to provide a $A1 billion subsidy purportedly to enhance irrigation efficiency. This was claimed to lead to water "savings" that could then be diverted to city dwellers. The difficulties with this approach have been well documented (see, for example, Crase and O'Keefe 2009) and include the fictitious nature of the water savings, the exorbitant costs and the narrow distribution of benefits. In effect, a select group of irrigators has benefited substantially from this approach at the expense of urban water users. In addition, the irrigation sector has been shown to be far from homogeneous – some irrigators have discovered that rents can be captured by a small group within the sector, with infrastructure bestowed on those in the most favourable geographic (and sometimes political) location.4

It is also worth noting that the urban water sector in Australia remains heavily dominated by government. In some jurisdictions the state government employs a corporatisation model for service delivery, whilst in others local government plays a direct role. In either case, the influence of political considerations on water trade manifests in urban water customers having less access to water than would likely be the case under different institutional arrangements.

In a similar vein, there is a suite of more explicit economic constraints to trade. As noted earlier, a component of the heralded water reforms in Australia was the decision to make water delivery and service charges reflective of cost. Translating this policy ambition into prices has not been straightforward. One consequence is that the irrigation sector faces a different set of charges to other users in many states. For instance, in Victoria urban water users pay a 5% levy to cover the "environmental costs" associated with water delivery, whilst the rate is set at half of this for the irrigation sector. Similarly, access charges were suspended for irrigators in Victoria as a result of the ongoing drought; no similar dispensation was offered to other sectors. In simple terms, these anomalies erode the benefits of trade between sectors.
Reluctance to Have Environment Sector Interests Calibrated by a Competitive Metric

To date we have primarily focussed on the difficulties of moving water resources between the irrigation sector and the urban water sector. As noted earlier, one of the motivations for water reform was the extant environmental degradation that had resulted from excessive extractions, especially by irrigation. Here we exploit the difficulty of defining a "sector" by arguing that environmental water interests might also be treated as a sector competing for water access.

Initially, two major policy approaches were used to appropriate more water to meet environmental demands. The first was to require jurisdictions to recognise the legitimacy of environmental claims on water. This resulted in the formulation of local and regional water planning documents that notionally allocated water to environmental purposes. The extended drought in the southern Murray–Darling Basin has resulted in the suspension of many of these plans, with extractive users given higher priority than environmental interests. Put simply, the fact that these plans were suspended is testament to the genuine weight put on the environmental sector and to the calibre of planning embodied in the plans.

4 This is perhaps illustrative of claims made elsewhere in this volume that defining a "sector" is no easy task.
The second immediate policy response, as noted above, was to deploy government funds to subsidise irrigation infrastructure with the aim of "saving" water that could then be apportioned to the environmental sector. The limitations of this approach have already been dealt with. More recently, it has become apparent that relatively little water has accrued from such projects, and governments have instead been actively purchasing water access rights from willing sellers. This activity is anticipated to expand with the scheduled announcement of a basin-wide plan, which is widely tipped to further limit extractive claims.

The political resistance to this approach has been described, but it is worth noting that there are other interests that stifle market trade to the environmental sector. The environmental interests in water are usually conceptualised as public goods, and, for many, the provision of public goods is seen as a legitimate role of government. As in many jurisdictions, water resources in Australia, including groundwater, are vested in the state. Thus, water access rights for irrigators and others exist in a hierarchy of rights and are subordinate to the state's interests. In this setting some find it untenable that the state would purchase water to deliver a public good, inasmuch as the water is "owned" by the state in the first instance. Clearly, there are sound public policy (and political) reasons why purchasing rights is preferable to acquiring water by legislative fiat. However, these arguments do not resonate with all voters, some of whom place environmental outputs in a category beyond reproach. The irony is that a reluctance to quantify the value of outputs produced by the environmental sector creates a void in which rent-seeking can thrive and in which trade in the interest of the environmental sector can be stifled.
Lessons for Sectoral Analysis and Concluding Remarks

We have argued in this chapter that sectors are treated very differently by public policy makers, especially when it comes to water allocation and water market access in Australia. In that context, thinking about the influence of public policy at a sectoral level provides useful insights into political economy. However, this falls short of arguing that sectoral performance and sectoral advantage should be the standard focus of economic analysis. If anything, the discussion in this chapter points to the need for caution on this front. In our view, developing policy that focuses too heavily on sectoral impacts is likely to reduce overall economic welfare.

First, large sums of public money have been expended with the specific aim of rebalancing the extraction of water from inland rivers without impacting too severely on the irrigation sector. By any sensible measure this has not succeeded. In their comprehensive review of water policy in the Murray–Darling, Lee and Ancev (2009, p. 17) observe that "there is limited empirical evidence of real improvement". They further note that over the past 17 years governments have expended about $A25 billion on correcting the ills of the basin but "this has not produced the envisaged restoration of over-allocated river systems" (Lee and Ancev 2009, p. 17).

Second, there is a non-trivial opportunity cost to these expenditures. In this regard, proponents of publicly subsidised irrigation infrastructure appear ignorant of this most rudimentary principle of public finance (see, for example, RMCG 2009). There is a propensity to justify projects that fail to deliver their intended outcomes on the basis that some wider economic benefit is bestowed by the work, say in the form of regional employment. More worrying is the trend to then use this argument as the basis for even more public expenditure. The "business case" required to allow approval of stage two of the Foodbowl Modernisation Project in Victoria is a case in point. Unable to rationalise the government expenditure on the basis of first-round benefits and costs,5 the developers of the business case opted for measuring the employment impacts of the installation of infrastructure. To accomplish this task a computable general equilibrium model was employed. Regrettably, and unbeknownst to those developing the model, an additional sum was "inexplicably" added to the benefits to show a positive outcome (Crase 2010). Stage two of the project is now well advanced and will cost taxpayers an additional $A1 billion. Had a more robust ex ante analysis of the public investment decision been undertaken, a different outcome might have emerged.

Third, there are serious long-term consequences for sectors attending some of the policies invoked to date. It is now widely accepted that climate change is likely to reduce water availability in the Murray–Darling Basin. Against this background there is strong support for approaches that encourage sectors to build adaptive capacity. It is difficult to reconcile this with policies that limit the trade of water between sectors.
If anything, such trades offer a means of adapting to scarcity and uncertainty. For instance, cleverly designed market instruments like options contracts have been shown to assist sectoral adaptation in this environment.

In sum, water policy formulation in Australia provides a useful case analysis of how sectoral influences can shape (and mis-shape) public policy. Using the sector as the basis of analysis provides insights into political economy. However, developing policy solely with the ambition of limiting impacts on sectors can give rise to outcomes that reduce economic welfare.
5 There were serious flaws with even the initial first-round estimates (see Productivity Commission 2010).

References

ABS (2008) Water and the Murray–Darling basin – a statistical profile, 2000–01 to 2005–06. Australian Bureau of Statistics, Canberra
Centre for International Economics (2008) Updated estimates of the cost of water restrictions in the ACT region. Prepared for ACTEW Corporation. Centre for International Economics, Canberra
Cooper B (2009) The social cost of urban water restrictions and related enforcement regimes. AARES (NZ branch) conference, Auckland
Crase L (2010) The spin and economics of irrigation infrastructure policy in Australia. Aust Q 82(2):12–20
Crase L, O'Keefe S (2009) The paradox of national water savings. Agenda 16(1):45–60
Davidson B (1969) Australia wet or dry? The physical and economic limits to the expansion of agriculture. Melbourne University Press, Melbourne
Department of the Environment, Water, Heritage and the Arts (2009) Water policy and programs. http://www.environment.gov.au/water/policy-programs/index.html. Retrieved 1 Mar 2009
Grafton Q, Ward M (2008) Prices versus rationing: Marshallian surplus and mandatory water restrictions. Econ Rec 84:s57–s65
Howard J (2007) A national plan for water security. Department of the Prime Minister, Canberra
Lee L, Ancev T (2009) Two decades of Murray–Darling water management: a river of funding, a trickle of achievement. Agenda 16(1):5–23
Letcher R, Powell S (2008) The hydrological setting. In: Crase L (ed) Water policy in Australia: the impact of change and uncertainty. Resources for the Future, Washington, DC, pp 17–27
Musgrave W (2008) Historical development of water resources in Australia: irrigation policy in the Murray–Darling basin. In: Crase L (ed) Water policy in Australia: the impact of change and uncertainty. Resources for the Future, Washington, DC, pp 28–43
Perry C (2009) Pricing savings, valuing losses and measuring costs: do we really know how to talk about improved water management? In: Albiac J, Dinar A (eds) The management of water quality and irrigation technologies. Earthscan, London
Productivity Commission (2010) Market mechanisms for recovering water in the Murray–Darling basin: final research report. Productivity Commission, Melbourne
RMCG (2009) Socio-economic impacts: closure of Wakool irrigation district (or parts thereof). http://wakool.local-e.nsw.gov.au/files/353401/File/SocioEconomicImpactsWakoolShire09.pdf. Retrieved 19 Dec 2009
Smith D (1998) Water in Australia: resources and management. Oxford University Press, Melbourne
The Economist (2003) Survey: liquid assets. Economist 368:13
About the Authors
Lin Crase is Professor of Applied Economics and Executive Director of the Albury-Wodonga campus at La Trobe University. He has spent the past 15 years examining water policy and its implications for resource management. He has written and edited three books and published over 60 scholarly journal papers in this area. His work has covered the difficult public policy dilemmas associated with water allocation in Australia and India.

Kurt Dopfer, Professor emeritus of Economics and Co-Director of the Institute of Economics at St. Gallen University, Switzerland, and visiting professor at various universities and academic institutions in Vienna, Tokyo, Brisbane, Dresden and elsewhere, has written and (co-)edited several books in ten languages, as well as numerous articles in academic journals, handbooks and lexica, on evolutionary and institutional economics, complexity and methodological issues.

Wolfram Elsner is Professor of Economics in the Institute for Institutional and Innovation Economics (iino), Faculty of Economics and Business Studies, University of Bremen, Germany. Among other positions, he worked as Director of the Planning Division of the Ministry of Economic Affairs of the State of Bremen from 1986 to 1995. He is Adjunct Professor in the Doctoral Faculty of the University of Missouri-Kansas City, and has served on the editorial boards of several international journals and on the boards, councils and committees of international economic associations.

Erik Händeler studied economics and then became a science journalist. He actively promotes the concept of Kondratieff cycles in the economy. His book "Die Geschichte der Zukunft" (The History of the Future) has become a bestseller, and he is a sought-after speaker at business events.

Torsten Heinrich is a teaching and research assistant, and doctoral student, at the University of Bremen. He studied in Dresden and Madrid and graduated in economics at the Dresden University of Technology. His research interests include the economics of growth and technological change, non-equilibrium theories, computer simulation and the economics of digital goods.

Benoît Pierre Freyens, Ph.D. (Economics), is a Senior Lecturer at the University of Canberra. He has previously worked for the European Statistical Office, and at Deakin University, the University of New South Wales, the Australian National University and the University of Wollongong. He is the author of numerous journal articles in socio-economic policy, labour economics and radio spectrum policy.

Stefan Mann heads the Socioeconomics research group of the Swiss Federal Research Station Agroscope. He holds one Ph.D. in Agriculture and one in Economics. After gaining work experience in the German Ministry of Agriculture, his research on agricultural policy repeatedly led him to question the normative and analytic foundations of economic science, which has resulted in a number of papers in heterodox and socioeconomic journals.

Shuko Miyakawa, MA from the University of Michigan, is a research assistant at the Research Institute of Economy, Trade & Industry, IAA in Tokyo.

Suzanne O'Keefe is Associate Professor and Associate Head of the Regional School of Business at the Albury-Wodonga Campus. Her research interests are in the area of applied economics and, more specifically of late, the field of water policy.

Andreas Pyka holds a Ph.D. in economics from Augsburg University, has been a professor at Bremen University since 2006 and has recently accepted a chair for innovation economics at Hohenheim University.

Pier Paolo Saviotti is a native of Italy. He received a Laurea in Chemistry from the University of Perugia (1968) and a Ph.D. in Physical Science, and is now a professor at the University of Grenoble.

Hiroshi Yoshikawa, Ph.D. from Yale University, is a professor of economics at the University of Tokyo.
His recent writings include Reconstructing Macroeconomics: A Perspective from Statistical Physics and Combinatorial Stochastic Processes, Cambridge University Press, 2007 (with Masanao Aoki).