Information Asymmetries and the Creation of Economic Value
A Theory of Market and Industry Dynamics
Joop A. Roels
IOS Press
© 2010 The author and IOS Press. All rights reserved.
ISBN 978-1-60750-478-8 (print)
ISBN 978-1-60750-479-5 (online)
Published by IOS Press under the imprint Delft University Press
IOS Press BV, Nieuwe Hemweg 6b, 1013 BG Amsterdam, The Netherlands
Tel: +31-20-688 3355, Fax: +31-20-687 0019
email:
[email protected] www.iospress.nl
LEGAL NOTICE: The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS
Preface

The writing of this book takes place in the year in which we celebrate the 150th anniversary of Darwin's theory of evolution. This theory revolutionized the understanding of evolution as a key principle underlying the development of biological species. It largely displaced the creationist view on the origin of life. Some 50 years earlier Sadi Carnot published a remarkable paper that made him one of the founders of thermodynamics, the science that allows understanding of transformations of energy and matter. Carnot's work on the heat engine led to the discovery of the second law of thermodynamics. The second law provides an arrow of time for the direction in which spontaneous processes evolve. It has been formulated in a variety of ways in the years after its discovery by Carnot. One of the popular formulations is that systems that are left alone develop in the direction of increasing disorder. Buildings that are left alone develop into ruins; the reverse process requires the input of solid and skilled labor. The combination of the theory of evolution and the second law of thermodynamics puzzled many early investigators. Evolution clearly proceeds in the direction of increasing complexity; ordered systems, such as humankind, evolved apparently spontaneously out of an initially unordered state. Fortunately, developments in the 20th century led to a reconciliation of thermodynamics and evolution. Prigogine and his coworkers formulated what could be called a general systems theory of evolution. It became clear that the evolution of "Order out of Chaos" is a necessary consequence of the second law of thermodynamics if we consider complex systems that operate in an environment that is not in thermodynamic equilibrium. The discovery of DNA and RNA as the basis of life led to the understanding that biological evolution is of an informational nature.
In biology the storage and processing of information form the basis of the evolution of increasingly complex organisms. For an extensive period of the history of life on earth, evolution was based on the further refinement of the immortal coils that characterize the double helix of DNA macromolecules. Later in evolution other ways of developing and transferring information emerged when the brain appeared and evolved to sophistication as the humanoids, and later Homo sapiens, appeared on the stage. This triggered so-called exogenic evolution: evolution based on transferring and developing information beyond the information carrier DNA. This led to the development of the socioeconomic system, with its institutions such as universities, economies, markets and firms. It is the ambition of this book to investigate the relation between the theories mentioned above and the storage, processing and transfer of information, in order to grasp the dynamics of economies, markets and industries. Most of the systems that are of interest to physicists, chemists and biologists are far too complex to be modeled in all detail. This certainly also holds for systems such as industries, economies and markets. In physics this leads to the widespread use of macroscopic models in which only part of the microscopic detail of the system is taken into account. A reduced information picture of the system is developed. This approach leads to limitations to the predictability of future behavior and to limitations to the extent to which potential value can be made free from the sources of value in the system. The extent of this loss is characterized by the statistical entropy of the macroscopic description. The ambition of the author is to develop a consistent theory of evolving systems with special reference to industries, markets and economies.
We show that the basic driving force behind the transactions that take place in our markets, industries and economies rests on the creation and maintenance of asymmetries in information. Furthermore, the value (and the cost) of information is quantitatively defined in terms of the concept of statistical entropy. This results in a general value transaction theory to be applied to socioeconomic systems.
This basic formalism is applied to systems in which asymmetries in information exist and develop. The theory is analyzed in terms of accepted economic theories such as the perfect competition model, transaction cost economics, the concept of dynamic capabilities and evolutionary approaches to organizations. Evolutionary approaches in particular are seen as promising, and the theories of evolution and complexity are analyzed from the perspective of physical, chemical and biological systems. It is then argued that these approaches can be generalized: evolution is, as said, a general feature of complex systems of which we can only have a limited information picture. This leads to the conclusion that the application of evolutionary approaches to markets, industries and economies does not have to be understood in terms of an analogy with biological evolution, but as a reflection of a general evolution theory of complex systems. We argue that there are both similarities and differences between biological and socioeconomic evolution. In the last chapter of the book the theory is analyzed in terms of a number of characteristics of industries and markets. The theories underlying the approach (thermodynamics of complexity, information theory, statistical thermodynamics, the theory of evolution) are not free from mathematical intricacies. This book tries to avoid mathematical intricacy as much as possible without sacrificing rigor. Most of the concepts are discussed in a verbal way to explain the mathematical formalism to a multidisciplinary community of readers. The main distinguishing feature of this book is that it develops a conceptually and mathematically consistent framework for the existing concepts used in organizational economics, in a way that should be accessible to readers who are not familiar with modeling approaches in physics, chemistry and biology.
Some parts of the present literature in "econophysics" lack that consistency and accessibility. The author hopes that this book will thus augment and complement existing approaches in the literature on organizational economics and evolutionary approaches to organizations.
TABLE OF CONTENTS

1. INTRODUCTION
1.1. General introduction
1.2. Industry: Competition, growth and decay
1.3. The nature of value and value transaction processes
1.4. Self organizing systems: Dissipative structures
1.5. The microscopic foundations of the macroscopic description
1.6. Organization of this work
1.7. Literature cited

2. OUTLINE OF MACROSCOPIC THERMODYNAMICS
2.1. Introduction
2.2. Macroscopic balance equations: The first and second law of thermodynamics
2.3. Constraints due to the first and second laws of thermodynamics
2.4. The Carnot cycle
2.5. Conclusion
2.6. Literature cited

3. INFORMATION THEORY, STATISTICAL THERMODYNAMICS AND VALUE TRANSACTION THEORY
3.1. Introduction
3.2. Information theory
3.3. The formalism of statistical thermodynamics
3.4. Towards a general value transaction theory
3.5. Conclusion
3.6. Literature cited

4. THE CAPITAL ASSET PRICING MODEL AND VALUE TRANSACTION THEORY
4.1. Introduction
4.2. The capital asset pricing model
4.3. A comparison of CAPM and VTT
4.4. Conclusion
4.5. Literature cited

5. THE TRANSFORMATION OF RISK INTO VALUE
5.1. Introduction
5.2. The generalized "Carnot Cycle"
5.3. An elementary free value transducer: The concept of price
5.4. A generalized market transaction
5.5. Conclusion
5.6. Literature cited

6. THE LINEAR FREE VALUE TRANSDUCER
6.1. Introduction
6.2. Production of statistical entropy: The concept of rates and forces
6.3. The linear phenomenological equations
6.4. The second law of value transaction revisited
6.5. The linear free value transducer
6.6. The linear free value transducer and the concept of maintenance dissipation
6.7. Conclusion
6.8. Literature cited

7. DISSIPATIVE STRUCTURES: THE STABILITY OF STEADY STATES
7.1. Introduction
7.2. The stability of non-equilibrium steady states
7.3. Evolution and the linear free value transducer
7.4. Conclusion
7.5. Literature cited

8. THE NON-LINEAR FREE VALUE TRANSDUCER: SUSTAINED EVOLUTION
8.1. Introduction
8.2. Evolution beyond the linear range
8.3. The Benard problem and instability of the macroscopic branch
8.4. Instability of the macroscopic branch in chemical reaction systems
8.5. A generalized free value transducer
8.6. Evolution through fluctuation and selection: The general case
8.7. The starting point of biological evolution, prebiotic evolution
8.8. Competition and sustained evolution
8.9. The dynamics of competition
8.10. Biological evolution: Dissipative structures and information processing
8.11. Conclusion
8.12. Literature cited

9. COMPETITION AND SELECTION: THE FIRM AND INFORMATION PROCESSING
9.1. Introduction
9.2. The dynamics of competition and selection
9.3. The dynamics of selection in Darwinian systems
9.4. Information content and the error threshold in evolution
9.5. Models of Darwinian systems: The Hypercycle
9.6. Competition and selection: An approach based on VTT
9.7. The nature of the firm and its evolution
9.8. Differences and similarities between biological and economic evolution
9.9. Conclusion
9.10. Literature cited

10. ECONOMIES, MARKETS AND INDUSTRIES: VALUE TRANSACTION THEORY
10.1. Introduction
10.2. The firm as an information processing structure
10.3. Competition and risk
10.4. The industry life cycle
10.5. Industry structure: The nature of entry barriers
10.6. Value transaction theory and the value chain
10.7. The internal value chain
10.8. Aspects of competence, technology and R&D management
10.9. An evolutionary view of corporate development
10.10. Aspects of Joint Ventures, Divestitures, Mergers and Acquisitions
10.11. Strategy Development
10.12. Economic cycles and fluctuations and the dynamics of systems
10.13. Conclusion and prospects
10.14. Literature cited

Glossary of salient terms
CHAPTER 1: INTRODUCTION.

1.1. General introduction.

The objective of this work is to develop a theory of "transactions", more specifically transactions in economies, markets and industries. The theory will cover a range of social and economic processes, such as market transactions, the emergence, growth and decay of industries, the competition between industries and the dynamics of industry structure. To achieve this, the force that drives transactions is identified. This force drives the establishment of organized structures, such as industries and markets, by a process of self-organization. This self-organization process leads to the establishment of increasingly complex structures called dissipative structures, because they need a constant source of "fuel" to maintain their integrity. Examples of evolutionary approaches to socioeconomic problems have appeared in the literature (Nelson and Winter (1982), Dopfer (Ed.) (2005), Wit (2003), Nelson (1987), Beinhocker (2007)). In this respect, the approach in this book is not new. What is new is the development of a consistent mathematical framework. The author has chosen not to review the literature, but rather to develop a new, internally consistent approach based on established physics-inspired principles. An important source of inspiration for this work is the theory of macroscopic thermodynamics, the theory of the transformation of matter and other forms of energy. In my earlier work, macroscopic thermodynamics was used extensively to understand the efficacy of growth and product formation in microorganisms (Roels (1980, 1983)). Microorganisms are organized forms of matter in which many molecular species "cooperate". The theory of non-linear non-equilibrium thermodynamics analyzes self-organization phenomena; over the past fifty-odd years, self-organization and evolution have been important topics in this field.
In systems away from equilibrium ordered structures appear that may increase in complexity as the distance from equilibrium grows. The emergence of these dissipative structures depends, as stated above, on the availability of a source of energy that is used to grow and maintain these non-equilibrium structures. Complete accounts of the relevant theories are well documented (Glansdorff and Prigogine (1971), Nicolis and Prigogine (1977), Prigogine (1980), Prigogine and Stengers (1984)). This book summarizes the most important concepts of non-equilibrium thermodynamics and shows their application to the description of market transactions and the establishment and dynamics of (industrial) organizations. The formalism of thermodynamics is extended to include these systems. The approach in this book reconciles the well-established approach to thermodynamics that is deeply rooted in the physical sciences with some of the principles from biological evolution, aspects of strategic management (Porter (1980, 1985)), concepts of organizational economics (Barney and Ouchi (1986), Douma and Schreuder (2008)) and the theory of capital markets (Jensen (1972), Sharpe (1970), Elton and Gruber (1984)). Chapters 2, 3 and 5-8 treat the thermodynamics-inspired formalism and show its generalization to socioeconomic systems. The theories discussed are mathematically intricate. However, we discuss most of the results verbally to assist the reader in understanding the concepts behind the mathematics. As a final remark to this general introduction, we stress that the application of evolutionary principles to socioeconomic systems is not to be interpreted in terms of an analogy with biology, but as a general feature of an evolutionary systems approach. Both economies and organisms are examples of complex systems that operate removed from equilibrium, beyond a critical limit. In such systems, evolution becomes a necessary feature.
Chapter 9, where we conclude on the nature of the firm, discusses this in more detail.
1.2. Industry: Competition, growth and decay.

Standard classical microeconomic theory introduces the perfect competition model (Williamson (1971), Barney and Ouchi (1986), Baumol et al. (1982), Chamberlin (1933), Hirshleifer (1976)). This model is based on the following assumptions:

- There exists a large number of buyers and sellers.
- Products and services from different sources are the same to the buyers; there is no product differentiation.
- There are no costs associated with market transactions.

These assumptions lead to a situation in which, after initial transients, equilibrium is reached. This state is characterized by:

- Supply and demand are balanced.
- The players in the market earn only normal profits, just enough to keep their assets in operation.
- A "socially optimal" situation in which resources are efficiently allocated.

Coase (1937) noted a further consequence of the traditional microeconomic theory. According to this theory, organizations such as industrial corporations should not exist; the market is a far more effective way to arrange transactions. This is strongly reminiscent of the results of thermodynamic theory: in a system at thermodynamic equilibrium, ordered structures cannot be maintained. This argument and its consequences are further developed later in this book. There are, in addition to the fact that industrial organizations exist, several other problems with the classical microeconomic approach:

- There are often differences in the products and services (in the remainder of this book we indicate products and services collectively by products) that are offered on the market. Apparently, there is room for product differentiation. This finds no place in a perfect competition world where efficient suppliers supply one optimal product.
- Firms, individuals and organizations exist that earn above-normal profits, which again finds no place in the world of perfect competition.
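As a minimal numerical illustration of the balanced-supply-and-demand equilibrium described above, consider linear demand and supply curves. The functional forms and all parameter values below are hypothetical, introduced here for illustration only:

```python
# Hypothetical linear market: demand Qd = a - b*p, supply Qs = c + d*p.
a, b = 100.0, 2.0   # demand intercept and slope (assumed values)
c, d = 10.0, 1.0    # supply intercept and slope (assumed values)

# Perfect competition clears the market where Qd equals Qs.
p_eq = (a - c) / (b + d)   # equilibrium price
q_eq = a - b * p_eq        # equilibrium quantity
print(p_eq, q_eq)          # 30.0 40.0
```

At this price the quantity supplied equals the quantity demanded; in the perfect competition world no player can profitably deviate from it.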
In the chapters that follow, the perfect competition world proves to be rather dull. Reality is far more exciting. We observe a world in which firms and above-average profits exist or, more correctly phrased, in which an environment is created that allows firms and above-average profits to exist. This treatment now continues with the development of a model of the firm. This verbal model serves as a stepping-stone for the development of more quantitative reasoning. Industrial activities derive from the fact that the products needed by society cannot be adequately sourced from the environment in a direct way. In addition, the need for specialization and the need to rely on teams of specialists to efficiently collect or produce the products contribute importantly to the appearance of firms. Finally, there can be significant economies of scale and scope that drive the emergence of industrial activity.
In summary, "hunter-gatherer" strategies, in which individuals obtained the products to supply their needs directly from the environment, supported the evolution of humankind for only a limited period. Conversion of resources to the products needed, initially in an artisanal way, later on increasingly in firms, started to dominate the market.

Fig. 1.1. A schematic model of industry (information, resources and markets).

In this evolution industrial activity increasingly depended on distinctive assets and competences (such as refined technological capabilities) to produce and supply in an efficacious way. Fig. 1.1 depicts a model of industry. Industry uses (captive) information and associated assets and competences to source materials, information and services from the environment and to transform these into products that supply a market need. This process adds value as perceived by the buyer. If the firm operates well, the buyer is prepared to pay more than the amount the firm has to spend to produce the product. In this way, the firm obtains a profit. It can invest part of this profit to generate new competences or assets and to develop new processes and products. This causes the information set of the industry to change. In this model, industry emerges because of the needs that exist in the market. Industry develops the information set needed to efficiently supply products that satisfy those needs. Another important concept that shapes industrial activity is competition. Generally, several firms are willing and able to supply products that satisfy a need. These firms compete for the value associated with satisfying that need. The firms are not equally efficacious and, because of competition, some firms may grow their share of the market whilst others are less successful. In addition, shaped by the forces of competition, an evolution of the industry takes place. This leads to the familiar life cycle of an industry or a market (Fig. 1.2). A new industry or market emerges because of an innovation in the resource base of the industry, the competences and assets base, the way of supplying the target need, or a combination of these. This triggers the so-called embryonic phase of the industry.
In the subsequent growth phase of the industry the forces of competition shape an evolution in which a process of learning by doing triggers the development of increasingly efficacious ways of satisfying the target need. The market grows in value. In the growth phase both the development of better products (product innovation) and the development of better processes to produce a given product (process innovation) take place. In most cases the number of players in the industry will increase, a phenomenon called evolutionary radiation in biology.
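The selection process sketched above, in which more efficacious firms gain market share at the expense of less efficacious ones, can be illustrated with a minimal replicator-style model. The dynamics and the fitness numbers below are purely hypothetical, not taken from the book's formalism:

```python
def step(shares, fitness, dt=0.1):
    """One Euler step of replicator dynamics: a firm's market share grows
    when its fitness exceeds the share-weighted industry average."""
    avg = sum(s * f for s, f in zip(shares, fitness))
    return [s + dt * s * (f - avg) for s, f in zip(shares, fitness)]

shares, fitness = [0.5, 0.5], [1.2, 1.0]  # two firms; firm 0 is more efficacious
for _ in range(200):
    shares = step(shares, fitness)
print(shares)  # firm 0's share approaches 1; the shares still sum to 1
```

Even a small, constant efficacy advantage compounds into near-total dominance, a toy version of the shake-out observed in maturing industries.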
Fig. 1.2. Industry evolution: the life cycle (market value versus time through the embryonic, growth, mature and aging phases).

In the further evolution of the industry, when we enter the mature stage, further optimization of products and processes becomes increasingly difficult. A unit of progress needs more and more effort. The growth of the value of the market stagnates. The number of players decreases as the less effective ones are forced out. Finally, the industry may enter the decay phase. This is often caused by an innovation that changes the way the target need is satisfied. Sometimes the target need disappears. Fundamentally better or cheaper concepts substitute for the existing products. An important feature of this model is that an industry structure does not exist in the way Porter discussed it in his influential publications (Porter (1980, 1985)). The industry structure is a dynamic consequence of evolution under the forces of competition.

Note 1.1. We use the term industry for a group of products and underlying industrial activities that supply one defined need in society.

Innovation that leads to the decay of an industry does not necessarily mean that the existing players decay with it. It may be one of the leading players in the old industry that pioneers the new approach. However, a period of fundamental innovation often gives opportunities to new entrants and poses a threat to the companies that were well entrenched in the old industry.

1.3. The nature of value and value transaction processes.

The ambition of this work is to lay the foundation for a quantitative approach towards the development of industries and markets. Therefore, it is necessary to identify the driving force behind industry evolution. Chapters 3, 5 and 6 present the theory to achieve this. The model derives from the notion that a need in society allows a corporation to earn a profit if it succeeds in more or less efficiently satisfying that need. The need presents an opportunity to successful companies.
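The market-value curve of Fig. 1.2 can be sketched, for the embryonic through mature phases, as a logistic S-curve. The parameters K, r and t0 below are hypothetical, and the final aging phase (driven by substitution) is deliberately not captured by this simple form:

```python
import math

def market_value(t, K=100.0, r=0.5, t0=10.0):
    """Logistic life-cycle sketch: slow embryonic start, rapid growth
    around the inflection point t0, stagnation near the ceiling K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

print(market_value(2.0))   # embryonic phase: market value still small
print(market_value(10.0))  # inflection point: K/2 = 50.0
print(market_value(30.0))  # mature phase: close to the ceiling K
```

The diminishing returns of the mature stage appear naturally: near the ceiling each unit of progress requires ever more effort, exactly as described in the text.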
To develop a quantitative approach we introduce a number of concepts, such as:
- The distinction between true value and so-called free value. True value is the intrinsic value of an asset or a product. Free value is the value to an actor that has only limited information about the asset. This distinction is an analogue of the distinction between internal energy and free energy in thermodynamics and of the concepts underlying the Capital Asset Pricing Model of finance (Chapter 4). Free energy is the amount of energy that is available to do useful work. It differs from internal energy due to the limitations of the information about the exact state of the system. This leads to the situation that only a limited amount of the internal energy is available to do useful work.
- The distinction between true value and the price obtained in the market. This is in contrast to the perfect competition model, in which price is equal to value.

The model predicts a difference between price and value. This is a direct consequence of the fact that industries create asymmetries in information that lead to a non-equilibrium situation and to different perceptions of free value by buyers and sellers. The relation between value, free value and price depends on the incomplete information the various players in the market have about the true value of resources and assets and on the asymmetries in that information between the actors involved in a transaction. As stated, the structure of the model bears close resemblance to macroscopic non-equilibrium thermodynamics. Free energy is a concept that measures the value of a source of energy in driving processes in physical and chemical systems. An analogous concept can specify the free value of an asset in terms of its ability to drive a market transaction. Thermodynamics is the prime example of a reduced information theory, a so-called macroscopic theory. Such theories provide a simplified picture of reality fitting the limited information the observer has about the system.
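The thermodynamic side of this analogy can be stated explicitly. For the Helmholtz free energy F of classical thermodynamics (a standard textbook relation, not a formula from this book's value formalism):

```latex
F = U - TS
```

Only F, not the full internal energy U, is available to do useful work; the term TS is the part of U made inaccessible by the entropy S at temperature T, that is, by the observer's missing microscopic information. Free value is proposed to stand to true value as F stands to U.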
Nevertheless, these theories remain valuable for making predictions about the likely behavior of a system. Reduced information theories are used in physics and chemistry because the overwhelming majority of the systems of interest to physicists and chemists are far too complex to allow analysis in detail. Real systems consist of a vast number of molecules, atoms and sub-atomic particles. Structures are complex and intricate patterns of interaction are common. As an example, a piece of iron of 56 grams contains about 6×10²³ atoms, and a detailed analysis in terms of this vast number of atoms is impractical and indeed impossible. Clearly, a complete picture, a full information description, is impossible for a non-divine observer. In the physical sciences a distinction is made between a microscopic description, in which the detailed and intricate microscopic structure of reality is taken into account, and a macroscopic description, in which only limited information about the salient averaged properties of a large collection of microscopic objects is considered. This is the subject of statistical mechanics. Chapter 3 gives an outline of this theory and generalizes it to economic transactions. The macroscopic description is typified as a reduced information approach. It takes into account the fact that we have only limited information about the exact microscopic structure of the system of interest. This is caused by the limitations of the observation process and the related uncertainty. The macroscopic description contains far less information than is needed to specify the future evolution of the system in all microscopic detail. A macroscopic description introduces uncertainty about the exact microscopic state of the system. Hence, the future evolution is also subject to uncertainty. This uncertainty emerges as a direct consequence of the reduced information description of the system.
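The information lost in a reduced description can be made concrete with a toy example, a single die rather than 6×10²³ iron atoms (the setup is illustrative, not from the text). An observer who records only the parity of the face, a "macrostate", gains 1 bit of information and leaves a residual uncertainty of log2(3) bits about the microstate:

```python
from math import log2

# Six equally likely microstates: the faces of a die.
H_micro = log2(6)          # full uncertainty: about 2.585 bits

# A reduced-information observer records only the parity (odd/even),
# a "macrostate" compatible with 3 microstates each.
H_gained = log2(2)         # information the macro-observation yields: 1 bit
H_residual = H_micro - H_gained
print(H_residual)          # the statistical entropy of the reduced
                           # description: log2(3), about 1.585 bits
```

The residual term plays the role the text assigns to statistical entropy: it measures exactly how much microscopic detail the macroscopic observer cannot resolve.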
Thermodynamics quantifies the uncertainty as the entropy of the reduced information description, a statistical notion. Macroscopic thermodynamics shows that in systems away from equilibrium new structures, so-called dissipative structures, appear because of self-organization phenomena. Beyond a well defined limit these structures may evolve into ever increasing complexity. This
development brings evolution of complex structures in an initially unstructured system within the realm of macroscopic theories. An example of such a system is the biosphere on earth, including human society with its markets and organizations such as firms. The next section briefly discusses the theory of dissipative structures. Chapters 6, 7, 8 and 9 provide far more detail. This paves the way to extend the formalism of macroscopic thermodynamics to include the description of economic aspects of the evolution of firms and markets.

1.4. Self organizing systems: Dissipative structures.

Macroscopic thermodynamics leads to the conclusion that an isolated system, that is, a system that does not exchange material or energy with the environment, evolves to a final state called thermodynamic equilibrium. At thermodynamic equilibrium the state in terms of the reduced information macroscopic description no longer changes. Macroscopic changes can no longer be observed in the system. Changes at the microscopic level continue to take place, but these cannot be observed in the reduced information description.

Note 1.2. The macroscopic description is a reduced information description of a system. This implies that often a vast number of microscopic states are observed as one macroscopic state. Processes that transform the system's state to another macroscopically indistinguishable microstate do not appear to take place to the reduced information observer. This can be compared to a poker game in which the suits (clubs, diamonds, hearts or spades) of the cards are indistinguishable. In such a game there would be far fewer different hands and opportunities to win or lose than in a normal game where clubs, diamonds, hearts and spades are observed as different.

Thermodynamic equilibrium is a state in which, given our macroscopic information, we have reached maximum uncertainty about the microstate the system is in. The lack of information of the observer has reached a maximum.
The second law of thermodynamics highlights this. It can be phrased in a number of different ways; an example is the statement that a system evolves towards a state of maximum disorder or minimum organization. The second law predicts the direction in which the evolution of a system takes place in the eyes of a macroscopic observer. The realm of the second law will be extended to include the description of economic transactions and organizations such as firms. Biological systems, markets, economies and industrial corporations are characterized by some kind of organization. The existence of galaxies, the earth, life and biological evolution, human civilizations, economies, industrial enterprises and markets shows that evolution proceeds towards increasing complexity. This is in apparent conflict with the second law, which predicts an evolution towards maximum disorder or decreasing organization and complexity. Many early investigators have assumed that life, human society and evolution belong to a class of phenomena that escapes the second law. Developments in thermodynamics during the last fifty years have highlighted that in systems that are not isolated but exchange matter and/or energy with the environment, an evolution takes place in which the development of order and organized structure becomes a direct consequence of the second law. What is required is to move away from equilibrium. The increasingly complex ordered structures that evolve have been termed dissipative structures. Dissipative structures can indeed only evolve and be maintained if matter and/or energy can be exchanged with the environment. Dissipative structures are, moreover, both a product and a source of non-equilibrium, and they constantly evolve rather
than exist in a given state. This latter phenomenon is highlighted in a book by the Nobel laureate Prigogine (1980), titled "From Being to Becoming".

Note 1.3. The classical example of the difference between dissipative structures and equilibrium structures is the distinction between a snowflake and a microorganism. Snowflakes are structures composed of ice crystals and have a beautiful ordered appearance. Snowflakes are equilibrium structures. Microorganisms, on the other hand, being a marvel of order at the molecular level, need to be supported by a constant flow of a source of nutrition such as sugar. Microorganisms are dissipative structures. The same applies to a city. A city only maintains its structure and order if a continuous flow of food and energy can be obtained from the environment.

Evolution of ordered structures of increasing complexity is a necessary consequence of the second law, "Order out of Chaos" as it has been termed by Prigogine and Stengers (1984) in their illuminating book. The basic prerequisites for the appearance of dissipative structures, as is more fully discussed in chapters 6, 7, 8 and 9, are the following:
Fig. 1.3. The evolution of dissipative structures: learning systems (an information set codes for an organization that draws on resources and contests the market; variation, selection and competition close the loop).
- A sufficiently large exchange of matter and/or other sources of energy and information is needed to create a sufficient distance from equilibrium. The basic flux that feeds industrial firms has been identified in fig. 1.1: the need for products in the market. In chapters 2, 3, 5 and 6 we will develop the formalism to describe this driving force.
- The processes that take place in the system need to be characterized by non-linear kinetics, e.g. autocatalytic phenomena, in which a system enhances its own rate of growth. Competition, growth and decay are examples of processes with such characteristics. This is extensively discussed in chapters 7, 8 and 9.
• The system must be capable of storing information about its own structure and its operations.
• Information is transmitted and copied by operations with a limited copying fidelity. This may be due to copying errors or 'experiments' in which the information is deliberately changed.
• The combination of the last two points leads to 'learning by doing' as a characteristic feature of dissipative structures. This will be highlighted in chapter 8 as the source of evolution.
In chapter 10 we will discuss this learning-by-doing development of industries and economies as dissipative structures in the perspective of the neoclassical approach to economics and the concepts of exogenous and endogenous growth models that consider the impact of technologies on the growth of economies (Romer (1990), Solow (1956), Aghion and Howitt (1998)). Fig. 1.3 summarizes these characteristics. The central entity that drives the cycle depicted in fig. 1.3 is a (captive) information set. It codes for the organization of the structure in both its tangible and its intangible aspects. The organization contains the assets, again both tangible and intangible, that allow it to produce the products and services with which the entity contests the market. This process of competition leads to selection at the level of both the organizations and their information sets. This leads to an evolution of the collective of information sets of the organizations supporting the market need. 1.5. The microscopic foundations of the macroscopic description: Statistical entropy. In the preceding section we identified the firm as a dissipative structure of an information processing nature. Chapter 3 introduces the microscopic foundations of the macroscopic description and a quantitative description of information and its processing. We define information as the information that is lacking in the macroscopic description to fully specify the microscopic picture of the system.
This introduces the concept of statistical entropy, I; it is given by:

I = −K Σ_i p_i ln p_i    (1.1)
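Eqn. 1.1 can be illustrated with a minimal numerical sketch. Here the p_i are the microstate probabilities, and K is taken as 1/ln 2 so that I is expressed in bits (as specified in chapter 3); the probabilities used are hypothetical:

```python
import math

def statistical_entropy(p, K=1.0 / math.log(2)):
    """Statistical entropy I = -K * sum_i p_i ln p_i (eqn. 1.1); bits for K = 1/ln 2."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to one"
    return -K * sum(pi * math.log(pi) for pi in p if pi > 0)

# Eight equally probable microstates: the observer lacks 3 bits of information.
print(round(statistical_entropy([1 / 8] * 8), 12))  # → 3.0
```

For unequal probabilities the entropy is lower: the macroscopic observer is less uncertain about the microstate.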
In which p_i is the probability that the system is in microstate i, given the macroscopic, reduced-information picture that we have of the system. K is a constant that we specify in chapter 3. We continue by defining the distinction between free value and true value as discussed earlier. Chapter 3 shows that free value, G, is related to true value, W, by the expression:

G = W − C_I I    (1.2)
Where C_I is the cost of information about the microstate of the system. This information has value and comes at a cost. It is the equivalent of absolute temperature in thermodynamics. We further show that this leads to the definition of a force that drives economic transactions. This is a vital element of the formulation of a macroscopic theory of evolutionary economics. The force is given by:

X = Δ(G/C_I)    (1.3)
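Eqns. 1.2 and 1.3 can be sketched numerically. All figures below (true value, cost of information, amounts of information) are hypothetical, chosen only to show how the quantities combine:

```python
def free_value(W, C_I, I):
    """Free value G = W - C_I * I (eqn. 1.2)."""
    return W - C_I * I

def force(G1, G2, C_I):
    """Driving force X = Delta(G / C_I) between two states (eqn. 1.3)."""
    return G1 / C_I - G2 / C_I

# Hypothetical numbers: true value 100, cost of information 2 per bit,
# 10 bits of information lacking about the microstate.
G = free_value(100.0, 2.0, 10.0)
print(G)                     # → 80.0
print(force(G, 60.0, 2.0))   # → 10.0
```

The force is positive here, so a transaction from the first state to the second would be driven in the natural direction.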
The force is a difference in the ratio of free value to cost of information. It is important to stress at this stage that the reduced-information picture of reality, highlighted by the statistical entropy of the description, comes at a cost. The macroscopic description and the theory of economies, markets and industries developed in this work can explain the evolution of markets and industries and the force that drives these systems to increasing complexity. However, due to the very limitations of the information of the macroscopic observer, it has to remain silent about the detailed nature of the evolution of the systems. The theory does provide the formalism to develop detailed tailor-made models for specific systems. Such endeavors are beyond the scope of this book. What can be stated is that, no matter how complex and detailed these models are, it will never be possible to fully grasp the complexity of reality. Hence unexpected behavior always lurks around the corner. This may be frustrating to some readers. However, it has to be realized that this inherent uncertainty and the associated risk are the substance that drives progress and creates attractive and profitable markets and industries. 1.6. Organization of this work. After this introductory first chapter, which sets the scene of this work, we introduce the formalism of macroscopic thermodynamics in chapter 2. This chapter discusses the first and second laws of thermodynamics and the concepts of energy, entropy, free energy and temperature. It also starts identifying some of the restrictions to the transformation of energy and matter. Chapter 3 introduces information theory and statistical thermodynamics and the statistical interpretation of the macroscopic concepts introduced in chapter 2. It also extends thermodynamics to include socioeconomic systems by the introduction of a value transaction theory. Chapter 4 compares value transaction theory with the Capital Asset Pricing Model of finance.
The latter model is widely accepted for the analysis of investment decisions subject to uncertainty. Chapter 5 applies the newly developed value transaction formalism to the transformation of risk into value and starts identifying the limitations to such transformations. This also sets the scene for further discussing the force that drives economic transactions. Chapter 6 introduces the linear free value transducer. It discusses transactions in the non-equilibrium region but restricts itself to situations close to equilibrium, where structures exist that are intrinsically stable and are not subject to evolution into increased complexity. Chapter 7 explicitly discusses the stability of structures in the near-equilibrium linear region and starts identifying the limitations to their stability. In chapter 8 we relax the restriction to the near-equilibrium linear region. We show examples of sustained evolution in physical and socioeconomic systems, highlight biological evolution and show the relevance of the concepts to socioeconomic phenomena in more detail. The limitations and prospects of the formalism are also debated. In chapter 9 we identify the nature of firms and markets in terms of value transaction theory and sustained evolution. Finally, chapter 10 presents a qualitative discussion of some well-established concepts of economic theory, organizational economics and business strategy.
1.7. Literature cited.
Aghion P., P. Howitt (1998), Endogenous Growth Theory, MIT Press, Cambridge (MA)
Barney J.B., W.G. Ouchi (Eds.) (1986), Organisational Economics, Jossey-Bass Publishers, San Francisco
Baumol W.J., J.C. Panzar, R.P. Willig (1982), Contestable Markets and the Theory of Industry Structure, Harcourt Brace Jovanovich, New York
Beinhocker E.D. (2007), The Origin of Wealth, Random House Business Books, London
Chamberlin E.H. (1933), The Theory of Monopolistic Competition, Harvard University Press, Cambridge (MA)
Coase R.H. (1937), The nature of the firm, Economica, New Series, 4(16), 386-405
Dopfer K. (Ed.) (2005), The Evolutionary Foundations of Economics, Cambridge University Press, Cambridge (UK)
Douma S., H. Schreuder (2008), Economic Approaches to Organizations, Fourth Edition, Pearson Education, Harlow (UK)
Glansdorff P., I. Prigogine (1971), Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley-Interscience, New York
Hirshleifer J. (1976), Price Theory and Applications, Prentice-Hall, Englewood Cliffs (NJ)
Jensen M.C. (1972), Capital Markets: Theory and Evidence, Bell Journal of Economics and Management Science, 3(2), 357-398
Nelson R.R., S.G. Winter (1982), An Evolutionary Theory of Economic Change, Belknap Press of Harvard University Press, Cambridge (MA)
Nelson R.R. (1987), Understanding Technical Change as an Evolutionary Process, North-Holland, Amsterdam
Nicolis G., I. Prigogine (1977), Self-Organization in Non-Equilibrium Systems, J. Wiley & Sons, New York
Porter M.E. (1980), Competitive Strategy: Techniques for Analyzing Industries and Competitors, Free Press, New York
Porter M.E. (1985), Competitive Advantage, Free Press, New York
Prigogine I. (1980), From Being to Becoming, W.H. Freeman, San Francisco
Prigogine I., I. Stengers (1984), Order out of Chaos, Bantam Books, New York
Roels J.A. (1980), Application of macroscopic principles to microbial metabolism, Biotechnology and Bioengineering, 22, 2457-2514
Roels J.A. (1983), Energetics and Kinetics in Biotechnology, Elsevier Biomedical Press, Amsterdam
Romer P. (1990), Endogenous Technological Change, Journal of Political Economy, 98, S71-S102
Sharpe W.F. (1970), Portfolio Theory and Capital Markets, McGraw-Hill, New York
Solow R. (1956), A Contribution to the Theory of Economic Growth, Quarterly Journal of Economics, 70, 65-94
CHAPTER 2. OUTLINE OF MACROSCOPIC THERMODYNAMICS. 2.1. Introduction. This chapter presents a broad-brush description of macroscopic thermodynamics. The reader is referred to Roels (1980, 1983) for additional detail and further references. As stated before, macroscopic thermodynamics is a reduced-information theory of a very complex microscopic reality; as such it does justice to the reduced-information picture of the observer. The remaining sections of this chapter develop the basic formalism. 2.2. Macroscopic balance equations: The first and the second law of thermodynamics. Fig. 2.1 presents the system for thermodynamic analysis. Thermodynamics introduces macroscopic quantities, i.e. quantities that are averaged over a large number of microscopic states that are grouped together in one macroscopic state. Examples are energy E, entropy S, pressure p, absolute temperature T, volume V, and the amount of chemical species i, n_i. This reflects the reduced-information description used in macroscopic thermodynamics. Chapter 3 discusses this in more detail.
[Figure: a system characterized by the state variables E, S, p, T, V and n_i, exchanging the flows Φ_i, Φ_E, Φ_S, Φ_P and Φ_Q with its environment.]
Fig. 2.1. System for thermodynamic analysis. Macroscopic thermodynamics starts with the definition of a scarce macroscopic quantity called energy, E. The amount of an extensive macroscopic quantity, i.e. a quantity additive with respect to parts of the system (the system being a part of reality that is delineated for further study), changes by two different types of processes: transformation processes and transport processes. In words:
Increase of amount = transformation + transport
If we formulate this in mathematical terms for energy, E, the result is:
dE/dt = Π_E + Φ_E    (2.1)
Here dE/dt is the change of the amount of E in the system per unit time, Π_E is the rate of production of E due to transformation processes in the system, and Φ_E is the rate of transport of E from the environment to the system. The first law of thermodynamics states that energy is a conserved quantity, i.e. it cannot be produced or destroyed in the transformation processes open to the system; in mathematical terms:

Π_E = 0    (2.2)
Thermodynamics further identifies three types of contributions to the transport of energy to the system:
Φ_E = Φ_Q + Φ_P + Σ_i Φ_i h_i    (2.3)

Where Φ_Q is the heat flow to the system, Φ_P is the rate at which work is performed on the system, Φ_i is the flow of compound i to the system and h_i is the enthalpy of compound i, a measure of the energy contained in one unit of compound i. If no work is performed on the system, eqn. 2.3 reduces to:

Φ_E = Φ_Q + Σ_i Φ_i h_i    (2.4)
Thermodynamics further introduces the second law of thermodynamics. It defines a function of state, the entropy S. The balance equation for S is, following the general form of the balance equation for any extensive quantity, written as the sum of a transformation term and a transport term:

dS/dt = Π_S + Φ_S    (2.5)

S is the entropy of the system, Π_S is the rate of entropy production in the processes in the system and Φ_S is the rate of transport of entropy from the environment to the system. The second law formulates a restriction to the entropy production in the processes taking place in the system: it must be positive or equal to zero:

Π_S ≥ 0    (2.6)

Thermodynamics proceeds by introducing the following expression for the entropy exchange between the system and the environment:

Φ_S = Φ_Q/T + Σ_i Φ_i s_i    (2.7)
T is the absolute temperature and s_i is the entropy of one unit of substance i.
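The transport terms of eqns. 2.4 and 2.7 can be illustrated with a small bookkeeping sketch. All flows and compound properties below are hypothetical numbers, chosen only to show how the terms combine:

```python
# Hypothetical steady flows for a system exchanging heat and two compounds (no work).
T = 300.0                      # absolute temperature
phi_Q = 50.0                   # heat flow to the system
phi = {"A": 2.0, "B": -2.0}    # compound flows to the system (B leaves the system)
h = {"A": 40.0, "B": 55.0}     # enthalpy per unit of compound
s = {"A": 0.10, "B": 0.12}     # entropy per unit of compound

# Energy transport, eqn. 2.4: Phi_E = Phi_Q + sum_i Phi_i h_i
phi_E = phi_Q + sum(phi[i] * h[i] for i in phi)

# Entropy transport, eqn. 2.7: Phi_S = Phi_Q / T + sum_i Phi_i s_i
phi_S = phi_Q / T + sum(phi[i] * s[i] for i in phi)

print(phi_E)             # → 20.0
print(round(phi_S, 4))   # → 0.1267
```

The same bookkeeping applies to any choice of flows; only the second law (eqn. 2.6) restricts which combinations can actually occur.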
Note 2.1. The first term at the right hand side of eqn. 2.7 derives from the fact that, for a system that is exchanging heat with the environment at only one level of temperature, 1/T is a so-called integrating factor which relates the flow of heat to the rate of change of the function of state entropy. Heat in itself is not a function of state; we cannot define an 'amount of heat' as a property of the system. Note 2.2. The nature of entropy is quite elusive. Entropy can be shown to be of a statistical nature; it reflects the lack of information of the observer about the exact microstate the system is in. This is caused by the fact that the observer has only limited macroscopic information about the system's detailed state at the microscopic level. We will discuss this more extensively in chapter 3. As stated, the second law of thermodynamics is quite elusive, and over the years its consequences have been formulated in a number of logically equivalent forms. One formulation is: In a system that is not in equilibrium the processes occurring tend to increase the entropy of the universe. Or, with reference to the system: In a system that is isolated, i.e. exchanges nothing with the environment, the entropy increases until thermodynamic equilibrium is reached. The law also implies that heat does not spontaneously flow from a cold to a hot body. This confirms our experience with a refrigerator: in order to remove heat at a low temperature we need an input of work, electricity. Another consequence is that heat cannot be completely transformed into work in a cyclical process. Clearly heat is lower quality energy than work. Work can be completely transformed into heat; the reverse is not possible. It follows that the second law provides the arrow of time in physics. It defines the direction in which processes take place if time proceeds. There is also a relation between organization and the second law.
If an organized form of matter such as a building is left alone, without further work performed on it, it will gradually but surely disintegrate. Buildings turn into ruins; the reverse process needs the input of work. 2.3. Constraints due to the combined first and second laws of thermodynamics. The next step is to introduce the Gibbs equation as an equation of state for energy; it is, ignoring work performed on the system, written as:

E = TS + Σ_i g_i n_i    (2.8)
Where g_i is the Gibbs free energy per unit of substance i and n_i is the amount of compound i in the system. Considering a system at constant temperature and taking the time derivative of both sides of eqn. 2.8 results in:

dE/dt = T dS/dt + Σ_i g_i dn_i/dt    (2.9)
Introducing a balance equation for the amount of substance i in the system, involving the usual transformation and transport terms, results in:

dn_i/dt = R_i + Φ_i    (2.10)
Where R_i is the net rate of production of compound i in the transformation processes in the system, and Φ_i is the rate of transport of compound i to the system. Combining eqns. 2.9 and 2.10 and some rearrangement results in:

dS/dt = (1/T)(dE/dt − Σ_i g_i (R_i + Φ_i))    (2.11)
With reference to eqn. 2.5 it follows that:

Π_S = (1/T)(dE/dt − TΦ_S − Σ_i g_i (R_i + Φ_i))    (2.12)
Taking into account the following expression for the time derivative of E, obtained by combination of eqns. 2.1, 2.2 and 2.4:

dE/dt = Φ_Q + Σ_i Φ_i h_i    (2.13)

And, from eqn. 2.7:

TΦ_S = Φ_Q + T Σ_i Φ_i s_i    (2.14)
Combining eqns. 2.12, 2.13 and 2.14 leads to:

−Σ_i R_i g_i = TΠ_S    (2.15)
In which g_i, the Gibbs free energy per unit of substance i, equals h_i − T s_i. Note 2.3. It is useful to rephrase eqn. 2.15 in an analogous but slightly different way; in this way a result is obtained that we use in chapter 6. The net production of compound i can be interpreted in terms of a reaction pattern of several chemical reactions that take place in the system. The stoichiometry of the j-th reaction can be expressed as:

Σ_r α_rj a_r → Σ_p α_pj a_p
In this reaction equation the a_r and a_p represent the chemical compounds relevant to the description of the system. The constants α_ij are the stoichiometric constants; by convention these are defined negative for reactants, i.e. compounds that appear at the left hand side of the reaction equation, while for products, appearing at the right hand side, the constants are positive. Each of the reactions can now be characterized by its reaction free energy, defined as:

Δg_j = −(Σ_r α_rj g_r + Σ_p α_pj g_p)
Defined in this way, a positive Δg_j represents a reaction that proceeds in the natural direction; it is equivalent to the affinity A_j as introduced by the Belgian scientist De Donder. Eqn. 2.15 can, using this result, also be written as:

Π_S = Σ_j (Δg_j/T) r_j
Using the equation for Δg_j this can also be written as:

Π_S = −Σ_j Σ_i α_ij (g_i/T) r_j
In this equation r_j is the rate of the j-th reaction. The equation shows that, due to the fact that the entropy production must exceed or be equal to zero, the net effect of a reaction pattern must be a decrease in free energy. The reader should note that the second law does not pose restrictions on the individual reactions in the system. Uphill reactions, i.e. reactions in the direction of increasing free energy, are perfectly possible if they are coupled to reactions with a sufficiently large decrease in free energy. This phenomenon of coupling is shown to be a very important principle in the remainder of this book. As a prelude to chapter 6 we note that, if we introduce the affinity of De Donder, the foregoing equation can also be written as:
Π_S = Σ_j (A_j/T) r_j

This formulation reveals that the entropy production can be written as a sum of the products of the forces, A_j/T, and the rates, r_j, of the transformations that are driven by these forces. On combining eqn. (2.15) with the second law of thermodynamics as phrased in eqn. (2.6), the following restriction results:
Σ_i R_i g_i ≤ 0    (2.16)
In summary, the application of the laws of thermodynamics puts a restriction on the natural direction of the transformation processes inside a system. Eqn. 2.16 shows that natural processes proceed in the direction of decreasing free energy. The limiting case of equality to zero refers to a situation in which no transformation activity takes place in the system, i.e. where the system is at thermodynamic equilibrium. We discuss the implications of eqn. 2.16 later. The concept of steady state is introduced; this is a state in which the time derivatives of the state variables, such as E, S and the n_i, become equal to zero. In that case eqn. 2.10 leads to the following conclusion:

Φ_i = −R_i    (2.17)
Combining eqns. 2.16 and 2.17 leads to:

Σ_i Φ_i g_i ≥ 0    (2.18)
Eqn. (2.18) states that for a system in steady state the flow of free energy towards the system must equal or exceed the free energy contained in the flows leaving the system, with the equality sign only applying to systems in thermodynamic equilibrium, in which no transformation processes take place. This section presented a concise overview of some important results from macroscopic thermodynamics. This resulted in two constraints, given by eqns. 2.16 and 2.18, and the concept of absolute temperature T. We further develop and refine these concepts in the next chapters. In this way a stepping stone results for defining the forces that drive transformations and transactions in a wide variety of systems.
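The coupling principle of Note 2.3 can be sketched numerically. The reaction free energies and rates below are hypothetical, chosen only to show an uphill reaction driven by a coupled downhill one:

```python
# Two coupled reactions at T = 300 (hypothetical numbers).
# Delta g_j > 0 means the reaction proceeds in its natural (downhill) direction;
# reaction 1 is "uphill" (free energy increases), reaction 2 is "downhill".
T = 300.0
delta_g = {1: -10.0, 2: +25.0}   # reaction free energies Delta g_j
rates = {1: 1.0, 2: 1.0}         # reaction rates r_j

# Entropy production: Pi_S = sum_j (Delta g_j / T) * r_j (Note 2.3);
# the second law (eqn. 2.6) demands Pi_S >= 0.
Pi_S = sum(delta_g[j] / T * rates[j] for j in rates)
print(Pi_S >= 0)  # → True: the uphill reaction is permitted because the coupled
                  #   downhill reaction produces enough entropy
```

Running reaction 1 alone (rates = {1: 1.0, 2: 0.0}) would give a negative Π_S and is forbidden by the second law.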
2.4. The Carnot cycle. In thermodynamics the Carnot cycle proves to be a useful concept for the study of energy transformations. As we have already seen, the combined first and second laws put a restriction on the direction energy transformations naturally take. In this respect it is instructive to revisit two of the alternative formulations of the second law of thermodynamics: a) The transfer of heat from a low temperature environment to a high temperature environment cannot be the only process taking place in an isolated system. An isolated system was defined as one not exchanging heat or any other source of energy, including matter, with its environment. b) It is not possible to transform heat completely into work. These statements put a restriction on the amount of work that can be obtained if heat is transferred between two temperature levels. We consider the process in fig. 2.2.
[Figure: a Carnot engine exchanging the heat flows Φ_Q1, at temperature T1, and Φ_Q2, at temperature T2, and a work flow Φ_P with its environment.]
Fig. 2.2. The Carnot engine. A system is in contact with heat baths at two temperatures, T1 and T2; it exchanges heat with these environments at rates of flow Φ_Q1 and Φ_Q2 respectively. Furthermore, work is exchanged with the environment at a rate of flow Φ_P. The system operates in such a way that its state, and hence its state variables such as entropy S and energy E, do not change. Application of the first law shows:

Φ_Q1 + Φ_Q2 + Φ_P = 0    (2.19)

Application of the second law results in:

Φ_Q1/T1 + Φ_Q2/T2 + Π_S = 0    (2.20)
Combining eqns. 2.19 and 2.20 results in:

−Φ_P = Φ_Q1 (1 − T2/T1) − T2 Π_S    (2.21)
Remember that the rate of flow of work was defined in the direction of the system; hence −Φ_P is the work that is performed by the system on the environment. The second law requires the entropy production to exceed zero or to be equal to zero; hence there exists a definite maximum to −Φ_P. It is given by:

(−Φ_P)_max. = Φ_Q1 (1 − T2/T1)    (2.22)
It is clear from eqn. 2.22 that the work performed on the environment will be positive only if the following inequality holds:

T2/T1 < 1    (2.23)
Eqn. 2.23 implies that for positive work T2 should be lower than T1, i.e. heat should flow from a high to a low temperature. Heat should flow in the natural direction defined by the second law. It is also clear that the work performed on the environment is always lower than the heat obtained from that environment. The thermodynamic efficiency of the process is now defined as η_th. (it is constrained between 0 and 1); it is given by:

η_th. = (−Φ_P)/(−Φ_P)_max.    (2.24)
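Eqns. 2.21, 2.22 and 2.24 can be sketched numerically. The heat flow, temperatures and entropy production below are hypothetical:

```python
def carnot_max_work(phi_Q1, T1, T2):
    """Maximum rate of work delivered to the environment, eqn. 2.22."""
    return phi_Q1 * (1.0 - T2 / T1)

def delivered_work(phi_Q1, T1, T2, Pi_S):
    """Actual rate of work -Phi_P for a given entropy production, eqn. 2.21."""
    return phi_Q1 * (1.0 - T2 / T1) - T2 * Pi_S

# Hypothetical engine: 100 units of heat per unit time from a bath at 600,
# rejecting heat at 300, with a small positive entropy production.
w_max = carnot_max_work(100.0, 600.0, 300.0)
w_actual = delivered_work(100.0, 600.0, 300.0, Pi_S=0.05)
eta = w_actual / w_max  # thermodynamic efficiency, eqn. 2.24
print(w_max)            # → 50.0
print(round(eta, 4))    # → 0.7
```

With Π_S = 0 (the reversible limit) the delivered work reaches the Carnot maximum and η_th. = 1.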
This section analyzed the Carnot engine; it is instrumental in the transformation of heat into work. Its broader relevance is discussed in chapter 5, section 5.2. 2.5. Conclusion. This chapter summarized the basic framework of macroscopic thermodynamics. For more detail the reader is referred to the literature cited in the references. It introduced the concepts of energy, entropy and free energy, the first and second laws of thermodynamics and the nature of temperature as a so-called integrating factor for the heat flow. These principles result in restrictions to the transformations and the associated flows of work, heat and energy-bearing substances to and from the system's environment. In the next chapters this theory proves to provide a useful stepping stone for the description of a wide range of phenomena.
2.6. Literature cited.
Roels J.A. (1980), Application of macroscopic principles to microbial metabolism, Biotechnology and Bioengineering, 22, 2457-2514
Roels J.A. (1983), Energetics and Kinetics in Biotechnology, Elsevier Biomedical Press, Amsterdam
CHAPTER 3. INFORMATION THEORY, STATISTICAL THERMODYNAMICS AND VALUE TRANSACTION THEORY. 3.1. Introduction. Consider a system of vast complexity at the microscopic level. We have only limited macroscopic information about its detailed microscopic state. The objective is to develop a reduced-information description of the system. As a consequence of the reduced-information picture we cannot be sure about the future development of the system. As an example we assume that W_0 outcomes are consistent with the macroscopic information we have about the system's present state. The system might, for instance, be a portfolio of stocks and bonds in the capital market, and the objective is to predict the value of that portfolio one year in the future. We only know that each of the whole-number percentage increases of the initial value between 8 and 15% is equally probable. We do, however, not know which increase will actually materialize after one year. This chapter presents a quantitative understanding of the amount of uncertainty that we have and a microscopic statistical interpretation of the macroscopic approach to complex systems. Furthermore, the microscopic interpretation of macroscopic thermodynamics and its statistical background is used as a stepping stone to provide a theory for a range of phenomena beyond classical physical systems. 3.2. Information theory. We return to the W_0 equally probable outcomes discussed in the previous section. The amount of information needed to pinpoint the actual microscopic state of the system, in the field of uncertainty created by the fact that we have only limited information, needs to be defined. Classical texts on information theory (Shannon and Weaver (1949), Brillouin (1962)) lead to the following expression:

I = K ln W_0    (3.1)
Where I is the amount of information needed to pinpoint the microstate of the system; we conveniently express it in binary units, so-called binary information units, bits. One bit is the amount of information needed to discriminate between two equally probable outcomes. K is a constant; it equals 1/ln 2 if I is expressed in bits. Note 3.1. Eqn. 3.1 and the value of K are easily put in perspective using the following example. Consider a pile of 8 coins. The coins are completely identical, except for the fact that one weighs more than each of the others, which are of equal weight. A simple weighing instrument is available. It has two scales and allows us to judge which of the scales contains the heavier object. In fact, the scales allow us to make one binary decision, i.e. to gather one bit of information. The question is now how many bits of information, i.e. how many weighings, we need to be able to select the heavier one from the 8 equally likely candidates. Using eqn. 3.1 and K = 1/ln 2 it follows:
I = ln 8 / ln 2 = 3
The conclusion is that 3 bits of information are needed. This is easily verified to be correct. The first action is to split the 8 coins into two groups of 4. Weighing, gathering one bit of information, allows us to ascertain in which group of 4 the heavier coin can be found. This group of 4 is now split into two groups of 2, and weighing, gathering a second bit of information, allows us to identify the group of 2 containing the heavier coin. A final weighing, gathering the third bit of information, allows identification of which coin of the remaining 2 is the heavier one. This can be generalized as follows. Each binary information unit allows the selection of one outcome out of 2 equally probable ones. Thus I bits of information allow us to select the outcome out of W_0 equally probable ones if the following equality holds:

2^I = W_0

Taking the natural logarithm of both sides of the preceding equation and elementary rearrangement results in:

I = ln W_0 / ln 2

This is equivalent to eqn. 3.1, and it identifies the value of K if information is measured in bits. Inspection of eqn. 3.1 leads to the identification of an interesting property of the amount of information. If there are two statistically independent events characterized by an amount of uncertainty, or required information, of I_1 and I_2 respectively, the amount of information characterizing the joint events equals I_1 + I_2. Amounts of information are additive with respect to independent events. Of course, only for events that are statistically independent are the amounts of required information additive. Note 3.2. To understand the addition of information somewhat better, the following example may be useful. Consider a person who holds stocks in the companies A and B. He does not know exactly how much the value of these stocks will increase in one year. However, he does know, for example from previous experience, that these increases are independent and can have, with equal probability, each of the eight whole-number increases between 8 and 15%. It is clear that the outcome of the portfolio of stocks A and B can have 64 different pairs of percentage increases of the stocks. Using eqn. 3.1 the amount of information needed to identify the outcome out of the uncertainty field of 64 equally probable ones is calculated to be 6 bits. For each of the stocks separately 8 outcomes are possible; hence 3 bits of information are needed to control each of the two fields of uncertainty separately. Indeed, if these two separate amounts of information are added, the amount of information characterizing the joint events is obtained. The situation is different if the prices of the
stocks are statistically dependent, i.e. if the outcome for the first stock already contains information about the outcome for the second stock. In that case the information is less than the sum of the amounts of information for each of the stocks separately, i.e. the following inequality applies to the information needed for the joint stocks, I_joint:

I_joint < I_1 + I_2
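The additivity discussed in Note 3.2 can be checked with a short sketch of eqn. 3.1 for equally probable outcomes, as in the examples above:

```python
import math

def info_bits(n_outcomes):
    """Information needed to select one out of n equally probable outcomes, eqn. 3.1
    with K = 1/ln 2 (i.e. log2 of the number of outcomes)."""
    return math.log2(n_outcomes)

# One stock: 8 equally probable outcomes -> 3 bits (the coin example gives the same).
# Two independent stocks: 8 * 8 = 64 joint outcomes -> 6 bits = 3 + 3.
print(info_bits(8))       # → 3.0
print(info_bits(8 * 8))   # → 6.0
```

For statistically dependent stocks the number of effectively distinct joint outcomes is smaller than 64, so the joint information falls below the sum, as the inequality above states.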
As the next step we generalize eqn. 3.1 to the case where the various scenarios have different probabilities. For the general case the following equation specifies the information needed for certainty about the microstate:

I = −(Σ_i p_i ln p_i)/ln 2    (3.2)
In eqn. 3.2 p_i is the probability of the system being in microstate i, considering the incomplete macroscopic information we have about the system's state. Note 3.3. If, as in our earlier example, there are W_0 equally probable possible microstates considering our macroscopic information, the probability of each of the microstates is given by:

p_i = 1/W_0

On substitution of this result in eqn. 3.2 the following expression for the amount of information is obtained:

I = −(Σ_i (1/W_0) ln(1/W_0))/ln 2, where the sum runs over the W_0 microstates.
Elementary rearrangement indeed results in eqn. 3.1. Note 3.4. A common distribution function used to describe the statistical characteristics of information is the so-called normal distribution function. For scenarios that are subject to a normal distribution the probabilities of the values x_i are given by:

p_i = (1/(σ√(2π))) e^(−(x_i − x_av.)²/(2σ²))
Where σ is the standard deviation of the normal distribution, x_av. is the average value of the scenario property x, and x_i is the value of the property x in the i-th scenario. Combining the normal distribution function with the information definition according to eqn. 3.2 shows that the amount of information characteristic of the normal distribution function is given by:

I = (ln σ + (ln 2π + 1)/2)/ln 2

Taking into account that we can only measure entropy differences, and that the only information needed to reconstruct the normal distribution function are the values of x_av. and σ, the constants beyond ln σ can be conveniently omitted by defining a suitable reference state. In that way the following expression for the statistical entropy of the normal distribution results:

I = ln σ / ln 2

The above equation provides the 'statistical' entropy. To convert it to the physical entropy, measured in the units applying to the specific case studied, a proportionality constant (like Boltzmann's constant k, to be introduced in the next section) is needed. In chapter 4, where we discuss the Capital Asset Pricing Model, the last equation obtained in this note is used.
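The two expressions of Note 3.4 can be compared numerically (the standard deviations below are hypothetical); the constant term drops out when entropy differences are taken:

```python
import math

def info_normal_full(sigma):
    """Information of a normal distribution in bits, including the constant term."""
    return (math.log(sigma) + (math.log(2 * math.pi) + 1) / 2) / math.log(2)

def info_normal(sigma):
    """Statistical entropy ln(sigma)/ln 2, after omitting the constant reference term."""
    return math.log(sigma) / math.log(2)

# The difference between two scenarios is the same for both expressions:
d_full = info_normal_full(4.0) - info_normal_full(2.0)
d_ref = info_normal(4.0) - info_normal(2.0)
print(round(d_full, 12), round(d_ref, 12))  # → 1.0 1.0
```

Doubling σ thus adds exactly one bit of missing information, regardless of the choice of reference state.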
3.3. The formalism of statistical thermodynamics. We consider a macroscopic system. It can exist in a large number of microstates i, each characterized by its energy E_i. In total N states are consistent with the macroscopic information we have about the system. If each of the states i has a probability p_i, the average value of the energy of the system, E_av., is given by:

E_av. = Σ_{i=1..N} p_i E_i    (3.3)
In fact E_av. is the energy E that we will most likely observe macroscopically, and henceforth we will use this latter symbol to indicate this average. The problem of statistical thermodynamics is now to find the distribution of the E_i over the states that are consistent with the macroscopic information we have about the system (Hill (1960), Andrews (1975)). The most probable distribution is given by:
p_i = \frac{e^{-\beta E_i}}{Q}    (3.4)
Where β is a constant independent of E_i and Q is the partition function. The partition function Q is given by:

Q = \sum_{i=1}^{N} e^{-\beta E_i}    (3.5)
Note 3.5. It is easily verified that eqn. 3.5 represents a normalized distribution function, i.e. it is consistent with the notion that the sum of all probabilities must be equal to one. This so-called normalization requirement is, of course:

\sum_{i=1}^{N} p_i = 1

If eqns. 3.4 and 3.5 are introduced, the sum at the left hand side in the equation above transforms into:

\sum_{i=1}^{N} p_i = \frac{\sum_{i=1}^{N} e^{-\beta E_i}}{\sum_{i=1}^{N} e^{-\beta E_i}} = 1
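The distribution of eqns. 3.4 and 3.5 is easy to evaluate numerically. A minimal sketch (the energy levels and β = 1 are illustrative assumptions, not taken from the text):

```python
import math

def boltzmann(energies, beta):
    """Most probable distribution p_i = exp(-beta*E_i)/Q (eqns. 3.4-3.5)."""
    weights = [math.exp(-beta * E) for E in energies]
    Q = sum(weights)                 # partition function, eqn. 3.5
    return [w / Q for w in weights], Q

energies = [0.0, 1.0, 2.0, 3.0]      # illustrative energy levels
p, Q = boltzmann(energies, beta=1.0)
print(sum(p))                        # normalization: sums to 1 (up to rounding)
E_av = sum(pi * Ei for pi, Ei in zip(p, energies))
print(E_av)                          # macroscopic (average) energy, eqn. 3.3
```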
The expression above indeed complies with the normalization condition. The constant β reflects the minimum value of the energy costs involved in obtaining information beyond the macroscopic information we have about the system. We discuss this in more depth in a moment. In order to get a better feeling for the relation between the microscopic and the macroscopic descriptions, we proceed as follows. The time derivative of E follows, with reference to eqn. 3.3, as:

\frac{dE}{dt} = \frac{d}{dt}\left(\sum_i p_i E_i\right)    (3.6)
Where E is the macroscopically observed energy; it is, as said, equal to E_av.. The derivative at the right hand side of eqn. 3.6 expands to result in:

\frac{dE}{dt} = \sum_i p_i \frac{dE_i}{dt} + \sum_i E_i \frac{dp_i}{dt}    (3.7)
Eqn. 3.7 shows that the rate of change of the macroscopic energy of a system has two contributions:
• The first term at the right hand side of eqn. 3.7 represents the change of the macroscopic energy due to the fact that some kind of work is performed on the system (this may also be a change in the amount of the chemical substances present in the system, i.e. if "chemical work" is performed on the system). This results in a rate of increase in energy of the i-th state by an amount dE_i/dt.
• The second term at the right hand side of eqn. 3.7 is of a totally different nature. It represents a change in the information of the observer about the probability of each of the i states; it stands for the combined effect of the changes in the probability density function p_i(E_i). The second term is identified as the contribution due to "information work" performed on the system. It represents the change in the information we have about the system's microstate.
From chapter 2, by combining eqns. 2.1, 2.2 and 2.3, we obtain the following equivalent formulation of eqn. 2.3; it is an equation for the rate of change of the macroscopic energy:

\frac{dE}{dt} = \Phi_Q + \Phi_P + \sum_i \Phi_i h_i    (2.3)
Realizing that the last two terms at the right hand side of eqn. 2.3 represent the chemical and other work performed on the system, the following relation is identified by comparing eqns. 2.3 and 3.7:

\Phi_Q = \sum_i E_i \frac{dp_i}{dt}    (3.8)
Eqn. 3.8 shows the statistical interpretation of heat; it reflects the uncertainty about the exact microstate of the system due to our reduced information picture. It is a statistical rather than a mechanical concept. Returning to eqn. 3.4 we take the natural logarithm of both sides:

\ln p_i = -\beta E_i - \ln Q    (3.9)
Combining eqns. 3.8 and 3.9 results in:

\Phi_Q = -\sum_i \left(\frac{\ln p_i}{\beta} + \frac{\ln Q}{\beta}\right) \frac{dp_i}{dt}    (3.10)
It is clear that the second term in brackets at the right hand side of eqn. 3.10 does not contribute to the total sum, i.e.:

\sum_i \frac{\ln Q}{\beta} \frac{dp_i}{dt} = 0    (3.11)
This directly follows from the normalization condition introduced in Note 3.5 (the first equation in the note) by taking the time derivative of both sides:

\sum_i \frac{dp_i}{dt} = 0    (3.12)
Combining eqns. 3.10 and 3.11 results in:

\Phi_Q = -\frac{1}{\beta} \sum_i \ln p_i \frac{dp_i}{dt}    (3.13)
We introduce the following equation:

\frac{d(p_i \ln p_i)}{dt} = \ln p_i \frac{dp_i}{dt} + \frac{dp_i}{dt}    (3.14)
Combining eqns. 3.14 and 3.13 results in:

\Phi_Q = -\frac{1}{\beta} \sum_i \left(\frac{d(p_i \ln p_i)}{dt} - \frac{dp_i}{dt}\right)    (3.15)
The contribution of the second term in brackets at the right hand side of eqn. 3.15 to the sum is zero by virtue of eqn. 3.12, hence the equation reduces to:

\Phi_Q = -\frac{1}{\beta} \sum_i \frac{d(p_i \ln p_i)}{dt}    (3.16)
On combination of eqn. 3.16 and eqn. 3.2 the following equality is obtained:

\Phi_Q = \frac{\ln 2}{\beta} \frac{dI}{dt}    (3.17)
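The relation in eqn. 3.17 can be checked by finite differences for a small quasi-static change of a Boltzmann distribution; in this sketch the change is induced by a small shift of β, and the energy levels are illustrative assumptions:

```python
import math

def boltzmann(energies, beta):
    w = [math.exp(-beta * E) for E in energies]
    Q = sum(w)
    return [x / Q for x in w]

def info_bits(p):
    # Shannon information (eqn. 3.2): I = -sum p_i log2 p_i
    return -sum(pi * math.log2(pi) for pi in p)

energies = [0.0, 1.0, 2.0]   # illustrative
beta, dbeta = 1.0, 1e-6
p1 = boltzmann(energies, beta)
p2 = boltzmann(energies, beta + dbeta)

heat = sum(E * (q - p) for E, p, q in zip(energies, p1, p2))   # sum E_i dp_i, eqn. 3.8
info = math.log(2) / beta * (info_bits(p2) - info_bits(p1))    # (ln 2 / beta) dI, eqn. 3.17
print(heat, info)   # the two expressions agree to first order in dbeta
```

The agreement holds only to first order, i.e. along a quasi-static path where the distribution retains its Boltzmann form at every instant.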
Eqn. 3.17 relates the heat flow to the rate of change of the lacking information about the exact microstate of the system due to our reduced information macroscopic picture of reality. It again highlights the statistical nature of heat. As the next step we return to eqn. 2.5, the expression for the rate of change of the system's entropy, S:

\frac{dS}{dt} = \Pi_S + \Phi_S    (2.5)
Returning to eqn. 2.7 we now consider the entropy flow to the system:

\Phi_S = \frac{\Phi_Q}{T} + \sum_i \Phi_i s_i    (2.7)
Considering the case in which the system exchanges only heat with the environment, combination of eqns. 2.5 and 2.7 results in:

\frac{dS}{dt} = \Pi_S + \frac{\Phi_Q}{T}    (3.18)
We consider the case of a reversible process by which the system's entropy changes, i.e. the limiting case in which Π_S equals zero, and eqn. 3.18 reduces to:

\frac{dS}{dt} = \frac{\Phi_Q}{T}    (3.19)
Note 3.6. Choosing a reversible path along which the system evolves may seem to be a little bit artificial. However, one has to take into account that entropy is a function of state. It only depends on the state the system is in at a given moment in time. Entropy does not depend on how that state was reached. So we are free to choose a reversible path. On combination of eqns. 3.19, 3.16 and 3.2, the following expression is obtained:

\frac{dS}{dt} = -\frac{1}{\beta T} \frac{d}{dt}\left(\sum_i p_i \ln p_i\right)    (3.20)
A fundamental result of statistical thermodynamics is that the constant β in eqn. 3.20 is given by:

\beta = \frac{1}{kT}    (3.21)
Where k is the Boltzmann constant; expressed in J/K it is equal to 1.3804×10⁻²³. Combining eqns. 3.21 and 3.20 leads to the result:

\frac{dS}{dt} = -k \sum_i \frac{d(p_i \ln p_i)}{dt}    (3.22)
Integrating both sides of eqn. 3.22 over time results in:

S = -k \sum_i p_i \ln p_i + C    (3.23)
Where C is an integration constant. We take into account that we cannot attribute a definite value to entropy; we can only define it with respect to an arbitrary reference level. Hence we can, for convenience's sake, arbitrarily set C equal to zero, and the following famous equation for entropy now finally results:

S = -k \sum_i p_i \ln p_i    (3.24)
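As a small numerical illustration (not from the book), the Gibbs form of eqn. 3.24 reduces to Boltzmann's S = k ln N when all N microstates are equally probable:

```python
import math

k = 1.3804e-23  # Boltzmann constant in J/K, value as quoted in the text

def gibbs_entropy(p, k=k):
    """Eqn. 3.24: S = -k * sum_i p_i ln p_i."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0)

# For N equally probable microstates this reduces to S = k ln N.
N = 1000
uniform = [1.0 / N] * N
print(gibbs_entropy(uniform))
print(k * math.log(N))   # same value: maximum entropy for N states
```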
If we compare this result with eqn. 3.2 the informational nature of entropy is confirmed.

Note 3.7. We briefly return to eqn. 3.21:

\beta = \frac{1}{kT}
The nature of this expression has been subject to debate in the literature (see e.g. Brillouin (1962)). The factor kT can be interpreted as the energy cost of information; at least it puts a minimum to the energy cost to obtain information beyond the macroscopic information we have. Careful analysis of e.g. the concept of the "Maxwell demon", a genie that was assumed to be able to avoid paying those costs and hence to escape the constraints due to the second law, leads to the conclusion that there exists no way to avoid paying this minimum cost. The demon has been effectively exorcised.

3.4. Towards a general value transaction theory. In the preceding section we discussed the application of a statistical microscopic approach to energy transformations in physical systems to derive the macroscopic description. This section generalizes this theory to a far broader range of systems, including industries and markets. The treatment is largely intuitive, although a discussion of greater mathematical rigor and consistency is possible. The literature presents accounts of such endeavors (Grandy Jr. (2008)). Here we prefer the intuitive approach because it involves less mathematical intricacy and hence better serves the purpose of this work. Consider a system with only a reduced information picture available. We investigate a feature of interest. Let us indicate this feature with the general descriptor "true value" (in the following we shall sometimes use the term value to indicate true value); its macroscopic quantity is indicated by W. The information about the system only allows the specification of a probability distribution for each of the N value states i the system can exist in. The probability of value state i is p_i. The probability distribution is normalized in the usual way, i.e. the sum of the probabilities for all allowable states is one. The "statistical entropy" of the probability distribution follows, by analogy to the treatment presented in section 3.2, as:
I = -K \sum_i p_i \ln p_i    (3.25)
Following the treatment in section 3.3 we define the average true value of W for the collection of all states i by:

W = \sum_i p_i W_i    (3.26)
Where the W_i are the true values of each of the allowable microstates i. The time derivative of W follows as:

\frac{dW}{dt} = \sum_i \left(p_i \frac{dW_i}{dt} + W_i \frac{dp_i}{dt}\right)    (3.27)
Eqn. 3.27 features the two types of contributions the reader should now be familiar with. The first term at the right hand side of eqn. 3.27 is the contribution due to value adding work performed on the system. The second term is the contribution due to changes in uncertainty about the value state of the system, as reflected by the change of the statistical entropy of the probability distribution. Closely following the material in section 3.3, the probability distribution is obtained by analogy to eqn. 3.4:

p_i = \frac{e^{-\beta_W W_i}}{Q_W}    (3.28)
In which β_W is a constant that we identify in a moment. The subscript W distinguishes it from its thermodynamic counterpart. Q_W is the value partition function:

Q_W = \sum_i e^{-\beta_W W_i}    (3.29)
Taking into account that the second term at the right hand side of eqn. 3.27 is the contribution due to a change of information, it can formally be written as:

C_I \frac{dI}{dt} = \sum_i W_i \frac{dp_i}{dt}    (3.30)
Where C_I is the value and hence the cost of information. Following the derivation in section 3.3 that identified the statistical nature of entropy, it becomes clear that indeed I is given by eqn. 3.25, and:

C_I = \frac{1}{\beta_W}    (3.31)
Comparing eqns. 3.21 and 3.31 it becomes clear that the value of information is equivalent to kT in thermodynamics.

Note 3.8. With reference to the discussion in Note 3.7 we can state that we cannot avoid paying this cost of information if we want to obtain information about the value state of the system beyond the information contained in the macroscopic description. We have to do value adding information work to obtain that additional information.

In analogy with the first law of thermodynamics, we introduce the conjecture that true value W is a scarce quantity that is conserved in the transaction processes in the system; hence it changes only by exchange flows Φ_W with the environment:
\frac{dW}{dt} = \Phi_W    (3.32)
Eqn. 3.32 represents the first law of Value Transaction Theory (VTT).
Note 3.9. The first law of Value Transaction Theory states that value cannot be created or destroyed in transactions in a system. This may seem a strange statement to the reader; we see apparent "value creation" all around us in our economies and in living nature, where a multitude of value bearing structures developed over the years. We reconcile this apparent conflict as follows. The ultimate source of value rests to an overwhelmingly major extent in the solar radiation that reaches the earth. The diversity of structures on earth, both in biology and the resulting human culture, has succeeded in harvesting only a minor fraction of the true value that is present in that radiation or has been created by that radiation on the basis of that constant flux of value, e.g. the value contained in the fossil resources and the production of sources of value in plants and animals. The problem is that we need the information on the basis of which we can transform this true value potential into free value that is accessible to us on the basis of our reduced information picture of reality. This is analogous to the distinction between internal energy and free energy in thermodynamics. Evolution on earth resulted in a situation in which it is possible to effectively couple to, and discover and create, sources of "free value" from the true value that is available. Our present day economies use only a small fraction of the true value that is available in solar radiation. So a long stretch of information work still has to be covered before the solar radiation can be fully harvested in terms of free value, value that is useful to the dissipative structures in the biosphere, including human society.

Following the development in section 3.3 we now introduce the second law of VTT; it provides an expression for the rate of change of the statistical entropy, I:
\frac{dI}{dt} = \Pi_I + \Phi_I    (3.33)
Where Π_I is the rate of production of the statistical entropy in the transactions in the system and Φ_I is the rate of flow of information to the system. The second law of VTT requires the rate of production of statistical entropy to exceed or be equal to zero, i.e. in general statistical entropy is produced in the transactions taking place in the system:

\Pi_I \geq 0    (3.34)
Reasoning analogous to that in section 2.3 introduces the concept of free value, G:

G = W - C_I I    (3.35)
G is the useful value to the observer in view of the statistical entropy of his reduced information picture of the system (see also Note 3.9). The following restrictions due to the combined first and second laws of VTT derive in complete analogy to eqns. 2.16 and 2.18:

\sum_i R_i G_i \leq 0    (3.36)

And for a system in steady state:

\sum_i \Phi_i G_i \geq 0    (3.37)
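To make the free value concept concrete, a small sketch (all numbers, including the cost of information C_I and the probability distributions, are illustrative assumptions, not from the text) computing G = W − C_I·I for a certain and an uncertain asset of equal true value:

```python
import math

def stat_entropy(p, K=1.0):
    """Eqn. 3.25: I = -K * sum p_i ln p_i (K = 1 assumed for simplicity)."""
    return -K * sum(pi * math.log(pi) for pi in p if pi > 0)

def free_value(W, p, C_I):
    """Eqn. 3.35: G = W - C_I * I."""
    return W - C_I * stat_entropy(p)

C_I = 10.0  # assumed cost of information, in value units per unit entropy

# Two assets with the same true value W but different uncertainty:
certain = free_value(100.0, [1.0], C_I)          # one state known: I = 0
uncertain = free_value(100.0, [0.25] * 4, C_I)   # four equally likely states: I = ln 4

print(certain, uncertain)
# Exchanging the certain asset for the uncertain one at equal true value
# destroys free value, consistent with the direction set by eqn. 3.36.
print(uncertain - certain)   # negative: -C_I * ln 4
```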
In these expressions G_i is the free value of asset i, R_i the net rate of transaction for asset i in the transactions in the system, and Φ_i the rate of transport of asset i to the system. Eqn. 3.36 states that free value can only be destroyed in the transactions in the system. Eqn. 3.37 states that for a system in steady state the flow of free value to the system needs to exceed the free value in the flows leaving it. We show the significance of these restrictions in subsequent chapters.

3.5. Conclusion. In summary we reached the following conclusions in this chapter. Eqn. 3.24 presents a statistical interpretation of entropy as it was introduced in macroscopic thermodynamics. It ties entropy to the limitations of the information about the exact microstate the system is in, due to the fact that we have only a reduced information picture in terms of the macroscopic description of the system. This limited information picture results in the distinction between macroscopic "true" energy and free energy that was introduced in chapter 2:

G = H - TS    (3.38)
This equation identifies enthalpy, H, as a measure of the system's true energy; G is the energy that is available to do useful work. Combining eqn. 3.38 and the statistical interpretation of entropy given in eqn. 3.24 we arrive at the following expression:

G = H + kT \sum_i p_i \ln p_i    (3.39)
The rationale behind eqn. 3.39 develops as follows. The fact that we have only a reduced information picture of reality results in a restriction on the useful work that can be obtained from a given quantity of potentially available energy. We pay a penalty for that lack of information; this penalty equals the product of the amount of information that is lacking, represented by the sum at the right hand side of eqn. 3.39, and the value, in energy units, of that information. An in depth study of the nature of that value (see e.g. Brillouin (1962)) shows that we cannot avoid paying that penalty. Even if we use the most clever way of gathering additional information about the exact microstate the system is in, in order to free up part of the additional "true energy" of the system, we have to pay an energy cost which exceeds its value or at best is equal to its value. By combining this interpretation with the second law we see that this reduced information also sets the direction which natural processes take. Natural processes proceed in the direction of decreasing information or increasing entropy, at least in isolated systems. Alternatively phrased: if we combine the system and its environment (the environment being the rest of the universe outside the system), the information will naturally decrease and entropy will increase in any possible process. To extend the picture beyond the realm of physics we developed an analogy of macroscopic thermodynamics beyond energy and generalized it to apply to every measure of value. In this way we arrived at a generalized reduced information macroscopic theory of complex real systems. We arrived at two laws, the first and second laws of VTT, that govern transactions in markets and industries. The first law states that true value is a conserved quantity; transactions cannot result in the generation of true value. The second law states that transactions result in the generation of statistical entropy; free value can only be gained if it is exchanged with the environment. This results in the definition of free value as the value that can be obtained based on our reduced information picture of reality. We also identify the cost of information; it is the price we have to pay for information beyond the information contained in our reduced information picture of reality. The chapters that follow show the application of these concepts to markets and industries.
3.6. Literature cited
Andrews, F.C. (1975), Equilibrium Statistical Mechanics, Second Edition, John Wiley & Sons, New York
Brillouin, L. (1962), Science and Information Theory, Second Edition, Academic Press, New York
Grandy Jr., W.T. (2008), Entropy and the Time Evolution of Macroscopic Systems, Chapter 3, Oxford University Press, Oxford
Hill, T.L. (1960), An Introduction to Statistical Thermodynamics, Addison Wesley, Menlo Park (CA)
Shannon, C.E., W. Weaver (1949), The Mathematical Theory of Communication, University of Illinois Press, Urbana (IL)
CHAPTER 4. THE CAPITAL ASSET PRICING MODEL AND VALUE TRANSACTION THEORY. 4.1. Introduction. In the theory of finance the Capital Asset Pricing Model (CAPM) is widely accepted as a decision making tool in investments subject to uncertainty. As such it is an example of a macroscopic approach that analyzes complex decision making problems in terms of a reduced information description. It is instructive to analyze the differences and correspondences between VTT, as developed in chapter 3, and CAPM. We start our analysis with a brief outline of the CAPM in the next section.
4.2. The capital asset pricing model. The Capital Asset Pricing Model is well documented in a number of texts on the subject (e.g. Sharpe (1970), Elton and Gruber (1984)). This section summarizes the main features of this model. CAPM presents an equilibrium model of markets for capital assets, for example stocks, bonds and liquid capital. It is a widely accepted simple model for these complex markets. Improvements have been suggested; however, for our purpose the standard CAPM is adequate. The development of the standard CAPM, to be termed CAPM in the remainder of this work, rests on a number of simplifying assumptions. We outline some of these here; for a more complete account the reader is referred to Elton and Gruber (1984), chapter 11:
• There are no transaction costs, i.e. there are no costs associated with the mere trading of the assets.
• No personal income tax rests on the transactions of the actors in the market.
• Buying and selling actions of individual buyers and sellers do not affect the price of an asset; the total community of investors determines the pricing of assets. This assumes perfect competition.
• The decision making of buyers and sellers solely derives from the asset's expected value and the uncertainty about the expected value. This is reminiscent of some features of VTT discussed in chapter 3.
• The uncertainty associated with the returns follows from the probability density function of expected value characterized by the normal distribution function (see chapter 3, Note 3.3). Furthermore, uncertainty is presumed given by the standard deviation of the normal distribution function.
• Investors prefer more value to less value and less uncertainty to more uncertainty.
• Investors can lend and borrow unlimitedly at a rate equal to the risk free return as defined later on.
The capital asset pricing model and value transaction theory
[Fig. 4.1. The normal probability distribution function; y-axis p_i, x-axis x_i, centered on x_av.]

Under these assumptions CAPM shows (Elton and Gruber (1984), chapter 11) that all investors will end up with portfolios of assets that lie along the so-called capital market line. This line characterizes efficient portfolios with a trade-off between expected return and risk that is optimal in the eyes of the collective investors. Indeed, the expected future returns are used as a measure for present day value in the eyes of the investors. They are, however, not paying the expected present value of the future returns, as they want to correct for the uncertainty about the future returns as measured by the standard deviation of these returns. This means that the collective investors have an attitude in which they do not want to pay expected value, but rather expected value corrected for uncertainty. This shows analogy with the distinction between true value and free value as discussed in chapter 3, and also with the distinction between energy and free energy in thermodynamics. Elton and Gruber (1984) show that the mathematical representation of the capital market line is the following:
R_{av.} = R_f + \frac{R_{m,av.} - R_f}{\sigma_m} \sigma    (4.1)
In this equation R_av. is the expected return on the asset portfolio of the investor; it is equivalent to the value of the portfolio. R_m,av. is the expected return on the so-called market portfolio; this is the return on all assets in the market weighted according to their presence in the market. It is the weighted total opportunity set of all assets. R_f is the return on the risk free asset; this is an investment opportunity with a zero uncertainty of return. In real life this could be approximated by government bonds. σ_m is the standard deviation of the returns on the market portfolio. σ is the standard deviation of the returns on the portfolio of the investor. The macroscopic relation describing this complex problem is very simple; it represents one of the fine examples of the power of reduced information macroscopic descriptions.
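Eqn. 4.1 is simple enough to evaluate directly; a sketch with made-up market parameters (the 3%, 8% and 20% figures are illustrative assumptions, not from the text):

```python
def capital_market_line(r_f, r_m, sigma_m, sigma):
    """Eqn. 4.1: expected return of an efficient portfolio with risk sigma."""
    return r_f + (r_m - r_f) / sigma_m * sigma

# Illustrative numbers: 3% risk free rate, 8% expected market return,
# 20% market standard deviation; portfolio with half the market's risk.
r = capital_market_line(r_f=0.03, r_m=0.08, sigma_m=0.20, sigma=0.10)
print(r)  # 0.03 + (0.05 / 0.20) * 0.10, i.e. about 5.5%
```

At σ = 0 the expression collapses to the risk free return, and at σ = σ_m to the market return, as the capital market line requires.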
Eqn. 4.1 is graphically depicted in fig. 4.2.

[Fig. 4.2. The capital market line for an efficient portfolio of assets; y-axis expected return, x-axis standard deviation of returns, with R_f, R_m,av. and σ_m indicated.]

It is instructive to study the structure of eqn. 4.1 in somewhat more detail. We develop the following verbal representation:
expected return = risk free return + (value of risk) x (amount of risk)
This expression shows the expected return, or the required return, to equal the return that can be obtained risk free, increased by a compensation for the risk associated with the investment. The value of this required compensation is equal to the value, or the price, of risk in the market multiplied by the amount of risk expressed in terms of the standard deviation of the value of the assets. This contribution is due to the uncertainty contained in the reduced information picture the investor has of the assets' future value. To arrive at eqn. 4.1 we presumed the probability distribution of the expected return to follow a normal distribution and that the standard deviation presents an adequate representation of the investors' appreciation of risk.

Note 4.1. Before proceeding we want to discuss the nature of the risk free return. The risk free return is assumed to be the return on assets in which no uncertainty exists about the future returns. This is a difficult concept, as it involves complete certainty about future returns, and given the complexity of capital markets it seems difficult to be completely certain. Even if we take government bonds as a proxy of a risk free asset some irreducible uncertainty will remain. Therefore we prefer to talk of the least risky asset in the market. In our VTT formalism this is easily accommodated if we assume that the least risky asset still has a positive statistical entropy. We have to remember that we cannot
assign an absolute number to entropy; we can only define it with reference to an arbitrary datum level, and we can then define our complication away by choosing the statistical entropy of the least risky asset as that datum level. Before embarking on a further discussion of the salient features of eqn. 4.1 it is useful to slightly rearrange it. As a first step we rework eqn. 4.1 to:

R_{av.} - R_f = \frac{\sigma}{\sigma_m}(R_{m,av.} - R_f)    (4.2)
Eqn. 4.2 is equivalent to:

R_{av.} - R_{m,av.} = R_f - R_{m,av.} + (R_{m,av.} - R_f)\frac{\sigma}{\sigma_m}    (4.3)
This reduces to:

R_{av.} - R_{m,av.} = (R_{m,av.} - R_f)\frac{\sigma - \sigma_m}{\sigma_m}    (4.4)
This is reformulated to:

R_{av.} - R_{m,av.} = C_{I,CAPM} \Delta\sigma    (4.5)
Eqn. 4.6 introduces the value or, in a market in equilibrium, the cost of information, C_{I,CAPM}. It is given by:

C_{I,CAPM} = \frac{R_{m,av.} - R_f}{\sigma_m}    (4.6)
Furthermore, Δσ = σ − σ_m, the difference between the standard deviation of the portfolio and that of the market portfolio, is introduced. In words, eqn. 4.5 states that according to the CAPM formalism higher uncertainty about future returns requires a risk premium that is proportional to the increase in the risk perceived by the investor multiplied by the cost defined by eqn. 4.6. Introduction of this concept of cost, eqn. 4.6, in eqn. 4.1 results in:

R_{av.} - R_f = C_{I,CAPM} \sigma    (4.7)
We will use the results obtained from CAPM so far to compare the CAPM formalism with the general Value Transaction Theory introduced in chapter 3, section 3.4. In section 3.4 we introduced a distinction between value, or true value, and free value, the value corrected for the uncertainty contained in the distribution function of the return on the portfolio of assets. That correction factor equals the product of the cost of information and the statistical entropy of the system's description in terms of a reduced information macroscopic approach; in words:
excess free value = value (and cost) of information × difference in statistical entropy

This expression bears a striking resemblance to both eqns. 4.7 and 4.5, which result from the CAPM formalism. The difference rests in the definition of statistical entropy. By assumption the statistical entropy is expressed in terms of the standard deviation in the CAPM formalism, and the relevant difference in statistical entropy is expressed as the difference in standard deviation. The VTT formalism expresses the statistical entropy in terms of the following expression:

I = -K \sum_i p_i \ln p_i    (4.8)
This reveals a significant difference between the two approaches. In VTT we do not need to assume a normal distribution, nor do we have to assume the standard deviation as a measure of perceived uncertainty. The VTT formalism leads us directly to the characteristics of the distribution function to be substituted in eqn. 4.8. Before further analyzing the difference between the CAPM and the VTT formalism we introduce a more general approach to the CAPM. An equation is presented expressing the expected return on an asset (or a portfolio of assets) in terms of the asset's so-called β (not to be confused with the β used in chapter 3). For a full discussion and derivation of this extended formalism the reader is referred to Elton and Gruber (1984). Returning to eqn. 4.1, the following modified version of this equation is introduced:

R_{av.} = R_f + (R_{m,av.} - R_f)\frac{\sigma}{\sigma_m}    (4.9)
For the more rigorous approach to the CAPM eqn. 4.9 is generalized to:

R_{av.} = R_f + \frac{R_{m,av.} - R_f}{\sigma_m} \frac{\sigma_{im}}{\sigma_m}    (4.10)
Where σ_im is the covariance of asset i with the market portfolio. This leads to a common formulation of the equation for the capital market line in the CAPM formalism:

R_{av.} = R_f + \beta_i (R_{m,av.} - R_f)    (4.11)
Where β_i is indicated as the "beta" of the i-th asset or portfolio of assets. The beta of the portfolio is given by:

\beta_i = \frac{\sigma_{im}}{\sigma_m^2}    (4.12)
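The beta of eqn. 4.12 can be estimated from return histories; a minimal sketch using population covariance and variance on synthetic, purely illustrative return series:

```python
def beta(asset_returns, market_returns):
    """Eqn. 4.12: beta_i = cov(asset, market) / var(market)."""
    n = len(market_returns)
    am = sum(asset_returns) / n
    mm = sum(market_returns) / n
    cov = sum((a - am) * (m - mm)
              for a, m in zip(asset_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

# Synthetic series: the asset moves twice as strongly as the market,
# so its beta comes out as 2; the market's beta against itself is 1.
market = [0.01, -0.02, 0.03, 0.00]
asset = [2 * m for m in market]
print(beta(asset, market))
print(beta(market, market))
```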
In words, the introduction of the β-value takes into account that the variance in the return of each asset has two contributions: an asset specific part and a part which contributes to the standard deviation of the market portfolio. It is possible to diversify away the asset specific part by investing in a sufficiently broad portfolio of assets, and this asset specific part does not contribute to the return expectation of investors. The contribution to the variance of the market portfolio, measured by the covariance between the market portfolio and the asset, is the part that contributes to the return expectation. For the case of uncorrelated return expectations of the asset and the market portfolio the expression presented in eqn. 4.9 is recovered. The material presented above summarizes the assumptions and the basic results of CAPM. In words these can be summarized as follows. Investors tend to have the same perspective about the risk of investing in a particular asset (a single asset or a portfolio of assets). This perspective is based on the probability distribution of the value of those assets, based on the return on those assets in the past. This distribution is assumed to follow a normal probability distribution, and the risk relates to the standard deviation of that normal distribution. There exists a market portfolio which contains a weighted fraction of all assets available in the market. The contribution of an asset to the standard deviation is given by the covariance between the asset under consideration and the market portfolio, weighted according to the weighting of the asset in the market portfolio. The product of this covariance and the cost of uncertainty in the market portfolio sets the expected excess return over the return on the risk free asset. In the next section we compare the CAPM formalism with Value Transaction Theory.

4.3. A comparison of CAPM and VTT. According to VTT, free value rather than intrinsic (true) value is a meaningful expression of the value that is available to the investor, based on his reduced information picture of complex macroscopic systems. In chapter 3 we introduced the following relation between free value and value:

G_i = W_i - C_I I_i    (4.13)
Where G_i is the free value of the asset, W_i is the value or true value of the asset, and I_i is the statistical entropy associated with the probability distribution of value. As discussed in the preceding chapter the statistical entropy is given by:

I_i = -K \sum_i p_i \ln p_i    (4.14)
We develop the following argument. Investing an amount of risk free capital in the same amount of expected value of assets characterized by a probability distribution of value proceeds in the direction of decreasing free value according to eqn. 4.13. According to VTT this transaction is perfectly possible, as free value decreases in such a transaction; it is allowed by the second law of VTT because the statistical entropy increases in such a process. However, this is different for the reverse process, as this would be uphill in terms of free value. This leads to a situation in which an investor will not invest risk free assets in a risk bearing asset of the same true value. Indeed, he will at least require a risk premium equal to the second term at the right hand side of eqn. 4.13. In case he obtains exactly that risk premium the free value change in the transaction is zero. This represents an equilibrium transaction in the VTT formalism. Hence, the expected return on the risk bearing portfolio, as a proxy of its expected value, follows according to VTT from the following equation:

R_{av.} = R_f + C_I I_i    (4.15)
The structure of eqn. 4.15 is equivalent to that of eqn. 4.9 if σ is used as a proxy for the statistical entropy I_i of the risk bearing portfolio. We now extend the reasoning above to the situation of a transaction involving two risk bearing assets, and we assume that one of the assets is the market portfolio. In that case we write eqn. 4.15 for the equilibrium transaction as:

R_{av.} - R_{m,av.} = C_I \Delta I_i    (4.16)
Where ΔI_i is the difference in statistical entropy between the two portfolios. The discussion now proceeds by comparing eqns. 4.16 and 4.5. It is clear that these equations become equivalent if Δσ is used as a proxy for ΔI and if C_I is given by eqn. 4.6. The CAPM and VTT formalisms lead to equations that are structurally equivalent. They are, however, not the same. The salient differences are analyzed as follows. If we assume that VTT presents an adequate representation of capital markets, eqns. 4.16 and 4.14 lead to the following expression:

R_av. − R_m,av. = −K C_I Δ(Σ_i p_i ln p_i)    (4.17)
If VTT applies this is the expression for the excess return the investor requires. We stress that this equation follows without any assumption about the nature of the probability distribution function of value, whilst eqn. 4.5, which describes the CAPM result, presumes a normal distribution function:

R_av. − R_m,av. = C_I,CAPM Δσ    (4.18)
Returning to the case of the normal probability distribution and revisiting the derivation presented in Note 3.3, the following result follows under the VTT formalism and assuming a normal distribution function:

ΔI_i = Δ ln σ / ln 2    (4.19)
If we substitute eqn. 4.19 in 4.16 it follows:

R_av. − R_m,av. = (C_I / ln 2) Δ ln σ    (4.20)

Inserting σ and σ_m results in:

R_av. − R_m,av. = (C_I / ln 2) ln(σ/σ_m)    (4.21)
This can be compared to a slightly modified version of eqn. 4.18:

R_av. − R_m,av. = (R_m,av. − R_f)(σ/σ_m − 1)    (4.22)
Eqns. 4.21 and 4.22 differ for the general case. The VTT and CAPM thus lead to different results. To analyze the difference between eqns. 4.22 and 4.21 in more detail, the case of a small excursion around the market portfolio in terms of σ is considered. For that case the following approximation to eqn. 4.21 applies:

R_av. − R_m,av. ≈ (C_I / (σ_m ln 2)) Δσ    (4.23)
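The relation between the exact VTT expression (eqn. 4.21) and the small-excursion form (eqn. 4.23) can be checked numerically; the values of C_I and σ_m below are hypothetical:

```python
import math

C_I, sigma_m = 0.08, 0.20  # hypothetical cost of information and market risk

def excess_exact(sigma):
    """Eqn. 4.21: (C_I / ln 2) * ln(sigma / sigma_m)."""
    return C_I / math.log(2) * math.log(sigma / sigma_m)

def excess_approx(sigma):
    """Eqn. 4.23: (C_I / (sigma_m ln 2)) * (sigma - sigma_m)."""
    return C_I / (sigma_m * math.log(2)) * (sigma - sigma_m)

for sigma in (0.21, 0.25, 0.40):  # 5%, 25% and 100% excursions from sigma_m
    print(sigma, excess_exact(sigma), excess_approx(sigma))
```

Near σ_m the two expressions agree; away from it the linear form requires the larger excess return (since ln(1 + x) ≤ x), consistent with Fig. 4.3.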
[Fig. 4.3. Comparison of CAPM (a) and VTT (b): expected return versus relative risk, both curves starting from the risk free limit and passing through the market portfolio.]

We reach the conclusion that, for small excursions of risk around the market portfolio, CAPM and VTT lead to expressions that are mathematically equivalent. For both large and small risks the return required if the CAPM formalism applies would be larger compared to the case where VTT applies. Fig. 4.3 illustrates this.

Note 4.2. The derivation of eqn. 4.23 from 4.21 proceeds as follows. We start from eqn. 4.21:

R_av. − R_m,av. = (C_I / ln 2) ln(σ/σ_m)
This equation is reformulated to:

R_av. − R_m,av. = (C_I / ln 2) ln((σ_m + Δσ)/σ_m)

Where Δσ is the size of the excursion around the market portfolio. The last equation transforms to:
R_av. − R_m,av. = (C_I / ln 2) ln(1 + Δσ/σ_m)

We introduce the following approximation for small x: ln(1 + x) ≈ x
Combining the last two equations leads to:

R_av. − R_m,av. ≈ (C_I / (σ_m ln 2)) Δσ
We recover eqn. 4.23. The author speculates that in many instances it will be difficult to discriminate between the CAPM and the VTT approach. We now introduce data from the literature (Elton and Gruber (1984), chapter 13, Sharpe and Cooper (1972)). The relevant data are summarized in Table 4.1; the last column was added by the author. Linear regression was performed on both return versus β (CAPM) and return versus ln β (VTT). The results of the linear regression, in terms of the fraction of total variance explained by the linear regression equations, are given in Table 4.2. Both equations provide a good correlation. The residual variance 1 − ρ² of the VTT approach is lower than that for CAPM. The difference is significant at the 92% confidence level. In the author's opinion the results of this single test are insufficient to reach a conclusion about a preference for one of the models, but we cannot refute the VTT model on the basis of the empirical evidence presented.
Table 4.1. Some literature data on risk versus return

Portfolio β    Average Return    ln β
1.42           22.7               0.35
1.18           20.5               0.17
1.14           20.3               0.13
1.24           21.8               0.22
1.06           18.5               0.06
0.98           19.1              -0.02
1.00           18.9               0.00
0.76           15.0              -0.27
0.65           14.6              -0.43
0.58           11.6              -0.54
Table 4.2. Linear regressions according to CAPM and VTT

                                                    CAPM regression    VTT regression
Fraction of variance explained by the model (ρ²)    0.9513             0.9710
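The regressions of Table 4.2 can be reproduced from the data of Table 4.1 with a few lines of Python (plain standard library; ln β is recomputed from β rather than taken from the rounded last column):

```python
import math

# Table 4.1 data: portfolio beta and average return (Sharpe and Cooper (1972)).
beta = [1.42, 1.18, 1.14, 1.24, 1.06, 0.98, 1.00, 0.76, 0.65, 0.58]
ret = [22.7, 20.5, 20.3, 21.8, 18.5, 19.1, 18.9, 15.0, 14.6, 11.6]

def r_squared(x, y):
    """Fraction of variance explained by an ordinary least squares line of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

r2_capm = r_squared(beta, ret)                        # return versus beta
r2_vtt = r_squared([math.log(b) for b in beta], ret)  # return versus ln(beta)
print(round(r2_capm, 4), round(r2_vtt, 4))
```

Both fits reproduce the explained-variance fractions of Table 4.2 to within rounding, with the VTT regression slightly ahead.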
We need to make some additional remarks on the application of models of financial markets. This involves an important methodological and philosophical problem. A model is a tool to describe a situation of more or less involved complexity. It is used to predict features of reality beyond the data on which the model is based in the first place. A very basic assumption or requirement is that the model and the system under consideration are independent, i.e. the existence of the model is not supposed to influence the processes that take place in the system. This may be too stringent an assumption if a model is used for a system in which human actors are present that have knowledge of the model. Certainly in a situation where the financial community accepts the predictions of the model, it may influence the outcome of the transactions between the players in the market. The author leaves it to the reader to judge the significance of this remark, which impacts on all models of financial and other socioeconomic interactions.

4.4. Conclusion.

Considering the material presented in this chapter the following line of reasoning seems reasonable. The CAPM and the VTT formalism both lead to the conclusion that both risk and true value determine what an investor will pay for an asset or a portfolio of assets. An investor requires an excess return, or excess expected value, on an investment that is subject to uncertainty about the returns in the future. The CAPM formalism presumes the normal distribution to give an adequate representation of the probability distribution of the future returns and quantifies risk in terms of the standard deviation of that distribution. VTT needs no assumptions about the probability distribution and quantifies risk in terms of the statistical entropy of the distribution function. In both approaches there will be a value (or cost) of risk; it is associated with the risk premium for the portfolio of all assets in the market, the market portfolio.
The distinction between CAPM and VTT lies in the measure of risk. In the general approach to CAPM the risk of an asset is measured by the asset's β. It is equal to the contribution of the asset to the variance of the market portfolio, expressed as the covariance between the asset's and the market portfolio's normal distribution functions. In VTT the risk premium on an asset depends on the contribution of the asset to the total statistical entropy of the market portfolio. This increases the confidence in the power of VTT to predict aspects of a wide range of phenomena, including capital markets. In the VTT formalism a fundamental driving force for natural transactions has been identified; it is related to a change in statistical entropy. In the next chapters the application of this notion will be extended to transactions beyond those involving only financial assets.
4.5. Literature cited.
Elton E.J., M.J. Gruber (1984), Modern Portfolio Theory and Investment Analysis, 2nd Edition, John Wiley & Sons, New York
Sharpe W.F. (1970), Portfolio Theory and Capital Markets, McGraw-Hill, New York
Sharpe W.F., G.M. Cooper (1972), Risk-Return Classes of New York Stock Exchange Common Stocks 1931-1967, Financial Analysts Journal 28, 46-57
CHAPTER 5. THE TRANSFORMATION OF RISK INTO VALUE.

5.1. Introduction.

Before embarking on the subject of the transformation of risk into value we start by recapping the first and second laws of Value Transaction Theory (VTT) as discussed in section 3.4. The laws derive from the observation that a macroscopic extensive quantity changes by two types of processes: exchange with the environment and production in the transactions taking place in the system. We consider two quantities: value W and statistical entropy I. The first law states that value is conserved; its production in transactions is zero. This leads to the following mathematical expression for the rate of change of value:

dW/dt = Φ_W    (5.1)
By comparing thermodynamics and VTT an expression for Φ_W, the flow of value to the system, results:

Φ_W = Φ_I + Φ_P + Σ_i Φ_i w_i    (5.2)
In eqn. 5.2 Φ_I is the flow of value free risk (generalized "heat") to the system, Φ_P is the flow to the system of risk free value (generalized "work") and the last term at the right hand side stands for the true value of the assets flowing to and from the system, Φ_i being their rate of flow to the system and w_i the true value of one unit of asset i. The second law of VTT leads to an equation for the rate of change of the statistical entropy of the system:

dI/dt = Φ_I/C_I + Σ_i Φ_i I_i + Π_I    (5.3)
I_i is the statistical entropy of asset i. The second law requires that the production of statistical entropy Π_I must be equal to or exceed zero, i.e. transactions produce statistical entropy. We complete the formalism by reminding the reader of the expression for free value g_i; it is defined as:

g_i = w_i − C_I I_i    (5.4)
Where the second term at the right hand side reflects that information is not a free commodity but comes at a price of C_I units of value per unit of information; C_I is the equivalent of absolute temperature (or more precisely kT, the product of Boltzmann's constant and absolute temperature) in thermodynamics. This brief summary serves as a primer for the discussion of some features of VTT.
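As a bookkeeping sketch of eqns. 5.1–5.3, the two balance equations can be written out in code; all rates and values below are hypothetical illustration numbers:

```python
def value_flow(phi_i, phi_p, flows):
    """Eqns. 5.1 and 5.2: dW/dt = Phi_W = Phi_I + Phi_P + sum_i Phi_i * w_i."""
    return phi_i + phi_p + sum(phi * w for phi, w, _ in flows)

def entropy_rate(phi_i, c_i, flows, production):
    """Eqn. 5.3: dI/dt = Phi_I / C_I + sum_i Phi_i * I_i + Pi_I."""
    assert production >= 0, "second law of VTT: Pi_I >= 0"
    return phi_i / c_i + sum(phi * s for phi, _, s in flows) + production

# One asset flowing in at rate 2 units/period, true value 10 per unit and
# 0.5 bit of statistical entropy per unit; generalized heat flow +3,
# risk free value flow -20, cost of information 4, production 0.1.
flows = [(2.0, 10.0, 0.5)]
print(value_flow(3.0, -20.0, flows))       # dW/dt = 3 - 20 + 20 = 3
print(entropy_rate(3.0, 4.0, flows, 0.1))  # dI/dt = 0.75 + 1.0 + 0.1
```

The flows are all counted positive towards the system, the sign convention used throughout this chapter.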
The transformation of risk into value
Note 5.1. The nature of the second law of VTT may become clear if we consider the following. Assume that we initially have an amount of information about a system that leaves a lack of information about the system's detailed microstate given by the statistical entropy. After a period of time our information will have become less relevant as the system has changed. This means that if we do no further research on the system, i.e. perform information work, the statistical entropy increases. A positive production of statistical entropy results if the system is left alone.

5.2. The generalized "Carnot Cycle".

Chapter 2, section 2.4 features a discussion of the so-called Carnot Cycle or Carnot Engine. It proves a valuable concept in understanding the limitations to the transformation of heat into useful work. Fig. 5.1 shows a representation of the generalized Carnot Engine. We refer to the thermodynamic equivalent that was the subject of section 2.4.

Note 5.2. It may be useful to expand a little on the nature of the Carnot Cycle, particularly for readers less familiar with thermodynamics. Those skilled in that art can skip this note. The French engineer Sadi Carnot (Carnot (1824)) studied the heat engine, an invention that triggered the industrial revolution. A heat engine is a device in which a heat flow from a high to a low temperature is coupled to the generation of a motive force. It can be used to drive a train. In daily life we observe that heat naturally flows from a high to a low temperature; that is why hot coffee succeeds in cooling. If a flow of heat is cleverly used it can produce a motive force, as in a heat engine. In the thermodynamic formalism the difference, also called the gradient, in temperature creates a positive force. Actually the force is equal to the difference in the reciprocal of absolute temperature, Δ(1/T). A positive force sets the natural direction of a process.
We will return to the coupling process that uses a positive force to generate another force in this chapter. We have identified the cost of information as the VTT equivalent of temperature in thermodynamics. Following the heat engine analogy we thus should expect that natural economic transactions proceed from high cost of information to low cost of information environments. This feature will be analyzed further in this chapter.
[Fig. 5.1. The generalized "Carnot" Engine for the transformation of statistical entropy into value: a flow Φ_I1 enters at cost of information C_I1, a flow Φ_I2 leaves at C_I2, and a flow of risk free value Φ_P is delivered.]
The engine exchanges flows of information, Φ_I1 and Φ_I2, with environments at two levels of the cost of information, C_I1 and C_I2. This process generates a flow of risk free value, Φ_P. The system operates in such a way that its state variables, W and I, are constant. Applying the first law of VTT according to eqns. 5.1 and 5.2, and counting Φ_P as the flow of risk free value delivered to the environment, results in:

Φ_I1 + Φ_I2 − Φ_P = 0    (5.5)

The second law of VTT, eqn. 5.3, results in:

Π_I + Φ_I1/C_I1 + Φ_I2/C_I2 = 0    (5.6)
Note 5.3. The rationale behind eqn. 5.6 may not directly be clear. In thermodynamics the entropy change associated with the flow of one unit of heat is 1/T. For generalized heat, or information, this leads to a change of statistical entropy of 1/C_I, considering the equivalency of temperature and cost of information.
The combination of eqns. 5.5 and 5.6 leads to the following expression for Φ_P (the reader should remember that, due to the sign convention for the flows, positive towards the system, this is the value adding work performed on the environment):

Φ_P = Φ_I1 (1 − C_I2/C_I1) − Π_I C_I2    (5.7)
Eqn. 5.7 shows that a positive flow of risk free value can result from the exchange of information between two levels of cost of information. A requirement is that the right hand side of eqn. 5.7 is positive. As the production of statistical entropy Π_I is positive or zero by virtue of the second law of VTT, a necessary but not sufficient condition for a positive flow of risk free value is:

Φ_I1 (1 − C_I2/C_I1) > 0    (5.8)

This can only apply if:

C_I2/C_I1 < 1    (5.9)
The inequality 5.9 implies that a necessary condition for the generation of a positive flow of risk free value is that the cost of information of the second flow is lower than that associated with the
first flow. Note the equivalency with the thermodynamic Carnot Engine, where it is required that heat flows from a high to a low temperature. Following the reasoning in section 2.4 we now realize that the maximum flow of risk free value will result if Π_I reaches the lower limit of zero:

Φ_P,max. = Φ_I1 (1 − C_I2/C_I1)    (5.10)
And the following expression for the maximum efficiency of such a process, η_max., results:

η_max. = 1 − C_I2/C_I1    (5.11)
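Eqns. 5.10 and 5.11 translate directly into code; the input flow and the two costs of information below are hypothetical illustration values:

```python
def max_efficiency(c_i1, c_i2):
    """Eqn. 5.11: the generalized Carnot efficiency, 1 - C_I2 / C_I1."""
    return 1.0 - c_i2 / c_i1

def max_value_flow(phi_i1, c_i1, c_i2):
    """Eqn. 5.10: the reversible limit (Pi_I = 0) of the risk free value delivered."""
    return phi_i1 * max_efficiency(c_i1, c_i2)

# Value free risk absorbed at C_I1 = 0.16 and rejected at C_I2 = 0.04.
print(max_efficiency(0.16, 0.04))        # 0.75: at most 75% becomes risk free value
print(max_value_flow(50.0, 0.16, 0.04))  # 37.5 value units per unit time
```

Any positive production of statistical entropy (Π_I > 0) reduces the delivered flow below this reversible limit, exactly as irreversibility degrades a thermodynamic Carnot Engine.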
In chapter 4 we showed that, for financial markets, the cost of information is related to the risk premium on the market portfolio in the CAPM formalism. Hence, the discussion presented above results in the following strategy for obtaining risk free value from movements in the risk premium in the market. A high cost of information is equivalent to a high risk premium; in this situation an investor could invest in assets bearing a relatively high risk. If he disinvests in a market with a lower cost of information, useful value adding work, i.e. risk free money, can be gained. Playing this game against the movements of the market would result in the creation of value for the investor. There are problems associated with this strategy. The first one is how the investor judges whether the risk premium in the market will go up or go down; this involves predicting the future, and by the very limitation of his reduced information he can never be sure. He can of course rely on past experience and estimate whether the risk premium in the market is more likely to go up or down. Even if he overcomes this problem, there remains another one. If we assume that all investors have the same information, their estimates of whether the market risk premium will go up or down will be the same. Hence, they will all prefer either low or high risk assets, and by the very workings of the market the preferred assets will show a relative increase in market price; a decreased return will be the result. The conclusion is that no strategy to extract value from transactions in the market exists if all players have the same information set. Of course the strategy could work if a limited population of all investors has more information about the market than others do. These so-called asymmetries in information can lead to a situation where a limited group of investors can extract value from the market at the expense of the less informed ones. We return to these asymmetries in a while.
In this work we emphasize the positive way in which informational asymmetries lead to the creation of economic value and economic growth. Of course, asymmetries in information can also lead to less desirable situations, and the present line of thinking in economic theory seems to emphasize these negative connotations. The book by Douma and Schreuder (2008) provides several examples of this. Here we will mainly use the positive aspect: informational asymmetries as the source of the force that drives economic growth.

5.3. An elementary free value transducer: The concept of price.

In this section we treat value transactions according to VTT using the system presented in Fig. 5.2. An investor invests in a risk bearing asset that is characterized by a cost of information C_I,
a true value W_0 and statistical entropy I_0. The investor pays a price P_0. A flow of value free risk (generalized "heat") Φ_I to the environment is identified. Application of the first law of VTT leads to:

dW/dt = −P_0 + W_0 + Φ_I    (5.12)
[Fig. 5.2. A transaction involving a risk bearing asset: the investor (W, I) pays a price P_0 for an asset characterized by C_I, I_0 and W_0, and exchanges a flow Φ_I with the environment.]
The second law results in:

dI/dt = I_0 + Φ_I/C_I + Π_I    (5.13)
Assuming that the statistical entropy I characterizing the investor does not change, eqn. 5.13 leads to the conclusion:

Φ_I = −C_I (I_0 + Π_I)    (5.14)
Combining eqns. 5.14 and 5.12 results in:

dW/dt = −P_0 + W_0 − C_I (I_0 + Π_I)    (5.15)
Introducing the definition of free value it follows:

G_0 = W_0 − C_I I_0    (5.16)
Combining eqns. 5.16 and 5.15 results in:

dW/dt = G_0 − P_0 − C_I Π_I    (5.17)
Considering eqn. 5.17 the following argument develops. The seller of the asset at least expects a price given by the right hand side of eqn. 5.16, the true value corrected for the cost of risk multiplied by the amount of risk associated with the asset. If it is assumed that both the cost of risk and the amount of risk are the same for the buyer and the seller, a transaction will only take place if the price P_0 equals the asset's free value G_0. Inserting this result in eqn. 5.17 leads to:

dW/dt = −C_I Π_I    (5.18)
Eqn. 5.18 leads to a straightforward conclusion. If buyer and seller have the same perception of the amount of risk and the cost of information, a transaction can at best lead to a zero change of the amount of value of the investor. This applies in a transaction where Π_I reaches the minimum of zero allowed by the second law. Generally the investor loses value if the production of statistical entropy in the transaction is positive. This could be due to the transaction costs involved. The result corresponds to the conclusion in the preceding section, where it was shown that a gradient in cost of information is needed to achieve a gain of free value. The discussion in this chapter shows that both a change in cost of information and a change in the appreciation of the risk associated with the asset can lead to a gain or loss of free value. The problem is that the information set which is available to all players in the market does not allow a prediction of whether the cost of information or the amount of risk will go up or go down. So in a situation of an efficient market, where all players have the same information and use the information in an optimal way, the average gain or loss of free value for the collective investors will be zero if there is no production of statistical entropy. Before concluding on this chapter we analyze a slightly more complex situation.

5.4. A generalized market transaction.
[Fig. 5.3. A generalized market transaction: the investor (W, I) receives assets at rate Φ_i with unit value w_i and statistical entropy I_i1, pays a price P_i per unit set by the seller's entropy I_i2, and exchanges value free risk Φ_I with the environment, all at cost of information C_I1.]

Fig. 5.3 represents a more involved market transaction. An investor, characterized by value W, statistical entropy I and cost of information C_I1, acquires a number of risk bearing assets. The rate of flow to the investor is Φ_i, the true value per unit w_i and the statistical entropy per unit I_i1. The investor pays a price P_i per unit of asset i in risk free value. The cost of information of the seller is also
C_I1; his amount of statistical entropy with respect to asset i is I_i2. Finally, value free risk is
exchanged with the environment at a rate Φ_I. The first law of value transactions leads to:

dW/dt = Σ_i Φ_i w_i + Φ_I − Σ_i Φ_i P_i    (5.19)
The seller expects a price for asset i that is equal to his perception of free value:

P_i = w_i − C_I1 I_i2    (5.20)
Furthermore, the second law leads to the expression:

dI/dt = Φ_I/C_I1 + Σ_i Φ_i I_i1 + Π_I    (5.21)
Assuming that the statistical entropy of the investor does not change it follows:

Φ_I = −Σ_i Φ_i C_I1 I_i1 − C_I1 Π_I    (5.22)
Combining eqns. 5.19, 5.20 and 5.22 results in:

dW/dt = Σ_i Φ_i C_I1 (I_i2 − I_i1) − C_I1 Π_I    (5.23)
For convenience's sake we consider the limiting case of zero production of statistical entropy, and eqn. 5.23 reduces to:

dW/dt = Σ_i Φ_i C_I1 (I_i2 − I_i1)    (5.24)
A positive increase of the value for the investor is possible only if the sum at the right hand side of eqn. 5.24 is positive. This is possible if asymmetries in information between the investor and the sellers exist. An investor can indeed gain value if the summed statistical entropy of the assets is lower for the investor than for the sellers. We stress that this only applies to the sum; it is not possible to specify a restriction on the contribution of the individual assets i. Further simplifying the discussion we study the case of only one asset being involved in the transaction, in which case it follows:

dW/dt = Φ_1 C_I1 (I_12 − I_11)    (5.25)
For a single asset, asymmetries in statistical entropy can result in transactions that are beneficial to the investor. These asymmetries are in fact the force that drives transactions in a non-equilibrium situation where gradients in statistical entropy occur.
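A numerical sketch of eqn. 5.25: the two distributions below play the role of the conditional distributions P(W_i | I_act.) and P(W_i | I_inv.) discussed in Note 5.4, and all numbers are hypothetical:

```python
import math

def entropy_bits(probs):
    """Statistical entropy I = -(1/ln 2) sum_i p_i ln p_i, in bits."""
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

def value_gain_rate(phi, c_i1, p_seller, p_investor):
    """Eqn. 5.25: dW/dt = Phi_1 * C_I1 * (I_12 - I_11)."""
    return phi * c_i1 * (entropy_bits(p_seller) - entropy_bits(p_investor))

# The seller sees four equally likely outcomes (2 bits); the better informed
# investor has sharpened the distribution (about 1.36 bits).
p_seller = [0.25, 0.25, 0.25, 0.25]
p_investor = [0.70, 0.10, 0.10, 0.10]
print(value_gain_rate(1.0, 2.0, p_seller, p_investor))  # positive: investor gains
```

The gain vanishes when the two distributions, and hence the two entropies, coincide: the equal-information case of the preceding section.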
5.5. Conclusion.

It is instructive to compare the result obtained in the preceding section with the classical microeconomic model of perfect competition (e.g. Barney and Ouchi (1986)). This theory and its limitations have been extensively discussed in the literature (e.g. Baumol et al. (1982)). We also summarized the features of the model in chapter 1, but for the reader's convenience we will present a somewhat extended summary again. The assumptions underlying the theory of perfect competition are (Chamberlin (1933), Williamson (1975)):
-The market has a large number of buyers and sellers.
-Low entry and exit barriers; the limitations to leaving and entering the market are negligible from a cost perspective.
-All producers and consumers have the same information and they use that information in the most effective way.
-The products in the market are homogeneous, i.e. the buyers have equal appreciation for the products offered by the sellers.
-There are no transaction costs.
These assumptions lead to a market with the following characteristics:
-Supply and demand are balanced.
-The sellers in the market earn only a "normal" return; probably this means that the players only earn the risk free return.
-The situation is socially optimal; it will lead to an optimal allocation of resources in the economy (Hirshleifer (1976)).
The second result of the perfect competition model agrees with the results of the analysis based on VTT: in an equilibrium situation prices in the market are at best equal to free value and on average no player can gain risk free value by the transactions. In addition we show that this critically depends on the information of the various players. In the case of asymmetries in information, over average returns are possible. Asymmetries in information are assumed away in the dull world of perfect competition.
It is clear that the perfect competition model, as well as the VTT formalism applied to an equilibrium situation, leads to severe discrepancies between the theory and some observations regarding real life situations:
-The theories offer no explanation for the existence of the firm (Coase (1937)). The theories lead to the conclusion that firms should not exist; markets are a more effective mechanism to optimally arrange transactions.
-The observation of heterogeneity of the products and services offered in the market and differences in the organization of the firms that contest the market. This is in apparent conflict
with the existence of a unique optimal equilibrium to which products, services and organizations converge.
-The fact that some firms earn over average returns and prosper and grow, whilst others disappear.
A good reflection of the problems with efficient markets and equilibrium theory is the way in which Jensen (1972) defines an efficient market. A market is efficient with respect to a given set of information if none of the players can earn a profit if they have access to the same information and optimally use it. This agrees with the information based VTT approach. If the information set of all players is equal, no creation of risk free value by transactions is possible. VTT, however, also shows that over average returns are possible if asymmetries in information exist.

Note 5.4. Mathematically, asymmetries in information can be explained from the nature of the probability distribution of value. The probability distribution relevant to an actor depends on his information about the state the system is in. Let us call the information set of the investor I_inv., whilst that of the other actors in the market is I_act.. The probability distribution P(W_i | I_inv.) is the distribution relevant to the investor; P(W_i | I_act.) is the one relevant to the other actors. If these distributions differ, asymmetries in information exist that lead to differences in statistical entropy and/or costs of information and hence differences in free value.

Fortunately, the VTT formalism has, in addition to the remark already made about asymmetries in information, a lot more to offer in removing the limitations of the perfect competition and the equilibrium approaches to markets. Clearly an equilibrium approach is not adequate to describe situations in markets in which firms and other actors compete. Such markets are generally not in equilibrium, and firms and other organizations are both the product and the source of this disequilibrium.
In a situation away from equilibrium, gradients in information and costs of information are created, maintained and destroyed. The underlying asymmetries in information lead to a situation in which over average returns can be obtained. In the general non-equilibrium case firms, markets and industries constantly evolve under the pressure of competition between the actors that contest the sources of free value. Based on our VTT approach we can also identify the force that drives market transactions and the appearance of industries and markets. In analogy with thermodynamics the force that drives transactions is a gradient in the ratio of the free value to the cost of information. In this way we define the forces X_i that drive transactions as:

X_i = Δ(g_i / C_I)    (5.26)
These results pave the way for the introduction of equations for the rate of the transactions in the market and the value that can be generated per unit time by the market transactions. In this context the linear free value transducer will be discussed in the next chapter.
5.6. Literature cited.
Barney J.B., W.G. Ouchi (1986), Organizational Economics, Jossey-Bass, San Francisco
Baumol W.J., J.C. Panzar, R.P. Willig (1982), Contestable Markets and the Theory of Industry Structure, Harcourt Brace Jovanovich, New York
Carnot S. (1824), Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance, Bachelier, Paris (in French)
Chamberlin E.H. (1933), A Theory of Monopolistic Competition, Harvard University Press, Cambridge MA
Coase R.H. (1937), The Nature of the Firm, Economica New Series 4(16), 385-405
Douma S., H. Schreuder (2008), Economic Approaches to Organizations, Fourth Edition, Pearson Education, Harlow (UK)
Hirshleifer J. (1976), Price Theory and Applications, Prentice Hall, Englewood Cliffs, New Jersey
Jensen M.C. (1972), Capital Markets: Theory and Evidence, Bell Journal of Economics and Management Science 3(2), 357-398
Williamson O.E. (1975), Markets and Hierarchies: Analysis and Antitrust Implications, Free Press, New York
CHAPTER 6. THE LINEAR FREE VALUE TRANSDUCER.

6.1. Introduction.

The preceding chapters developed and started discussing the basic formalism of Value Transaction Theory (VTT). We showed its derivation from accepted principles in macroscopic theory and statistical thermodynamics. The treatment highlighted the information based nature of entropy and statistical entropy as a reflection of our reduced information picture of the microscopic richness of almost all systems of interest. We introduced the first and second laws of thermodynamics and VTT and showed their application to a number of systems in physics (thermodynamics) and finance (the Capital Asset Pricing Model). This resulted in a number of restrictions on transactions. These principles proved useful in the understanding of value transactions. The next step is to move beyond equilibrium systems. We will analyze systems that are not in equilibrium because they exchange value at at least two levels of free value with the environment, e.g. value free risk (generalized "heat") at two levels of cost of information (generalized "temperature") or assets of differing free value. In discussing these non-equilibrium situations the physical analogue, the theory of macroscopic irreversible thermodynamics, will again be extended to a broader value definition than the physical concept of energy. The reader is referred to the literature for a more in depth discussion of the pertinent thermodynamic theories (Prigogine (1967), Haase (1969), Glansdorff and Prigogine (1971), Roels (1983)). In this chapter the discussion focuses on the "linear" region, where we stay relatively close to equilibrium. The reader may wonder whether the term "relatively close" is sufficiently specific to warrant a well founded treatment. Indeed we will show in subsequent chapters that there exists a well defined threshold where the linear region ends and the non-linear region begins.
In the non-linear region a vast richness of organized structures may appear and generally does appear.

6.2. Production of statistical entropy: The concept of rates and forces.

Our discussion starts by returning to chapter 2, or to be more precise section 2.3, Note 2.3; it highlighted that for a system in which processes take place the entropy production follows as:

Π_S = Σ_i r_i Δ(g_i / T)    (6.1)
Analyzing the structure of eqn. 6.1 shows that the entropy production is expressed as the product of the rate of the i-th chemical reaction, r_i, and a force, the difference in the ratio of free energy to temperature between the reactants and the products of the i-th process, Δ(g_i/T). In chapter 5 we generalized the force to the concept of value. For that general case the following expression for the production of statistical entropy applies:

Π_I = Σ_i r_i Δ(g_i / C_I)    (6.2)
In this equation the r_i are now the rates of the transactions in the system.
The linear free value transducer
For convenience's sake we write:

Π_I = Σ_i r_i X_i    (6.3)
In this equation the X_i are the forces. Eqn. 6.3 provides a general expression for the production of statistical entropy in a pattern of processes or transactions taking place in a system. There may be other forces, for example a gradient in the cost of information, Δ(1/C_I) (see section 5.2). We logically assume that the rates of the transactions depend on the magnitude of the forces, i.e. we can formally write:

r_i = f(X_1, ..., X_i, ..., X_n)    (6.4)
Eqn. 6.4 introduces the concept that every rate depends on all the relevant forces and not only on the force associated with the particular transaction, the so-called conjugate force. At equilibrium the production of statistical entropy becomes zero because the rates and the forces become zero simultaneously.

Note 6.1. The reader should be aware that only the macroscopic rates of the processes go to zero; at the microscopic level fluctuations take place and the processes continue to proceed both "uphill" and "downhill". That the macroscopic rates appear zero is caused by the averaging inherent in a macroscopic approach. This is also known as the principle of detailed balance.

As we already stressed, the second law of VTT results in a restriction: the production of statistical entropy must exceed or equal zero. If only one process contributes to the production of statistical entropy the conclusion is straightforward: that process must go downhill, in the direction of decreasing free value. As indicated in Note 2.3, this does not apply if more than one process takes place. It is not necessary for all reactions or transactions to proceed downhill. Uphill reactions are possible if the net result of all the processes taking place in the system concurs with the second law. This phenomenon, called coupling, is very important to the understanding of quite a number of phenomena in the world around us.

Note 6.2. The coupling of uphill and downhill processes is frequently observed in daily experience. Consider the example of an electrical power plant operated on the basis of a dam in a river flowing downhill. The water flowing through the dam drives an electrical generator and electrons are driven up a gradient in voltage. This process results in useful energy and is the result of the coupling of the power generator to the natural process of the water flowing downhill.
The latter process provides the positive force that drives the electrons uphill against the gradient in voltage.
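These ideas can be made concrete with a small numerical sketch (not from the book; the rates and forces are hypothetical numbers chosen for illustration). It evaluates the production of statistical entropy of eqn. 6.3 for one downhill and one coupled uphill transaction:

```python
# Illustrative sketch of eqn. 6.3: Pi = sum_i r_i * X_i for a set of
# coupled transactions. All rates and forces are hypothetical numbers.

def entropy_production(rates, forces):
    """Production of statistical entropy, Pi = sum_i r_i * X_i (eqn. 6.3)."""
    return sum(r * x for r, x in zip(rates, forces))

# Transaction 1 runs downhill (rate and conjugate force both positive);
# transaction 2 is driven uphill, so its contribution r_2 * X_2 is negative.
rates = [2.0, 1.5]
forces = [1.0, -1.0]

contributions = [r * x for r, x in zip(rates, forces)]
total = entropy_production(rates, forces)

print(contributions)  # [2.0, -1.5]: the uphill term on its own is negative
print(total)          # 0.5: the net production still satisfies Pi >= 0
```

The uphill transaction is admissible only because the downhill one over-compensates for it; if the magnitude of the uphill term exceeded that of the downhill term, the second law of VTT would be violated.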
6.3. The linear phenomenological equations.

In the preceding section we identified rates and forces that contribute to the production of statistical entropy in a system that is not in equilibrium. In this section we formulate explicit expressions for the relation between the rates and the forces. It is quite natural to assume such a relation, as the forces drive the corresponding processes. The formulation of these rate equations is the subject of linear irreversible thermodynamics. It led to the introduction of the so-called linear rate equations, the phenomenological equations. These apply if the system is not too far from equilibrium. This development resulted from the famous work of Onsager (Onsager (1931a, 1931b)). These equations take the form:
$r_j = \sum_i L_{ij} X_i$   (6.5)
In eqn. 6.5 the $L_{ij}$ are the so-called phenomenological coefficients; these are constants that do not depend on the forces. Eqn. 6.5 introduces several interesting concepts. We illustrate this for the simple case where only two forces exist in the system under consideration. In that case the equations take the form:
$J_1 = L_{11} X_1 + L_{12} X_2$   (6.6)

$J_2 = L_{21} X_1 + L_{22} X_2$   (6.7)
To stay close to the treatment and symbols normally used in the thermodynamics of irreversible processes we now introduce the symbol $J_i$ rather than $r_i$ for the rates. Eqns. 6.6 and 6.7 show that indeed, in addition to being driven by its "own" force, its conjugate force, a process is also influenced by the second, non-conjugate, force, and vice versa. All forces thus drive, at least in principle, all processes in the system, and this is exactly the mechanism that allows uphill processes to be driven by downhill processes. One further result pertaining to the phenomenological coefficients has been obtained in thermodynamics: the reciprocity relations, which are a consequence of coupling close to equilibrium:
$L_{12} = L_{21}$   (6.8)
If eqn. 6.8 is substituted in eqn. 6.7 the final set of phenomenological equations results:
$J_1 = L_{11} X_1 + L_{12} X_2$   (6.9)

$J_2 = L_{12} X_1 + L_{22} X_2$   (6.10)
Eqn. 6.8 states that the coefficient of coupling of rate one to force two must be equal to that of the coupling of rate two to force one.

Note 6.3. The rationale behind the reciprocity relations can be understood using the analogy of friction. If two surfaces are rubbed against each other they both experience the same friction force on the surface where they meet. If the surfaces are in movement, their rates of movement are influenced by that same friction force. Thus the reciprocity relations follow from the observation that both surfaces experience the same friction force.

In this section we formulated rate equations for the dependence of the rates of the transactions in a system on the forces that drive these transactions. Next we revisit the second law of VTT to formulate restrictions on the values of the phenomenological coefficients. Subsequently, we introduce the concept of the linear free value transducer and study some of its properties.

6.4. The second law of value transaction revisited.

If eqns. 6.3, 6.9 and 6.10 are combined, the following equation for the production of statistical entropy follows:
$\Pi_I = L_{11} X_1^2 + 2 L_{12} X_1 X_2 + L_{22} X_2^2$   (6.11)
The second law of VTT leads to the restriction:
$\Pi_I \geq 0$   (6.12)
Combining eqns. 6.11 and 6.12 results in:

$L_{11} X_1^2 + 2 L_{12} X_1 X_2 + L_{22} X_2^2 \geq 0$   (6.13)
It can be easily shown (see e.g. Roels (1983)) that the following restrictions to the phenomenological coefficients result (we have dropped the equality signs as the system is not in equilibrium):
$L_{11} > 0$   (6.14)

$L_{22} > 0$   (6.15)

$L_{12}^2 < L_{11} L_{22}$   (6.16)
Eqns. 6.14 and 6.15 imply that the phenomenological coefficients describing the coupling of a rate to its conjugate force must necessarily be positive, i.e. a positive force drives its conjugate rate in a positive direction. Eqn. 6.16, which poses a restriction on the coupling of a rate to the non-conjugate force, is somewhat more involved. It states that the coefficient of coupling can never exceed an upper limit. This restriction makes intuitive sense: the coefficient describing cross-coupling can never be so large that the downhill process comes to a halt or is itself driven uphill. The inequality 6.16 can be used to define a so-called degree of coupling $q$ whose absolute value can never exceed unity:

$q = L_{12}/\sqrt{L_{11} L_{22}}$   (6.17)
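A minimal sketch (with hypothetical coefficients, not taken from the book) of the second-law restrictions 6.14-6.16 and the degree of coupling of eqn. 6.17:

```python
# Hypothetical sketch: checking the second-law restrictions on the
# phenomenological coefficients (eqns. 6.14-6.16) and computing the
# degree of coupling q (eqn. 6.17). Coefficient values are illustrative.
import math

def degree_of_coupling(L11, L12, L22):
    if L11 <= 0 or L22 <= 0:        # eqns. 6.14 and 6.15
        raise ValueError("conjugate coefficients must be positive")
    if L12 * L12 >= L11 * L22:      # eqn. 6.16
        raise ValueError("cross-coupling too strong for the second law")
    return L12 / math.sqrt(L11 * L22)

q = degree_of_coupling(L11=4.0, L12=1.8, L22=1.0)
print(q)  # 0.9: strongly coupled, but below the limiting value of 1
```

Any coefficient set violating eqns. 6.14-6.16 would permit a negative production of statistical entropy, which is why the function rejects it.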
6.5. The linear free value transducer.

Consider the system depicted in fig. 6.1.

Fig. 6.1. A linear free value transducer: an input flow $J_1$ driven by force $X_1$ is coupled to an output flow $J_2$ against force $X_2$.

A system is shown that is in a steady state and acts as a free value transducer (Roels (1983)). The input to the transducer is a source of free value which drives a transaction characterized by a decrease of free value, creating a force $X_1$; the rate of this flow is $J_1$. The output of the free value transducer is an uphill process characterized by a negative force $X_2$ and a rate $J_2$. The following phenomenological equations describe the transducer:
$J_1 = L_{11} X_1 + L_{12} X_2$   (6.18)

$J_2 = L_{12} X_1 + L_{22} X_2$   (6.19)
Apart from the degree of coupling $q$ defined in the preceding section, we also introduce the so-called phenomenological stoichiometry, $Z$:

$Z = \sqrt{L_{22}/L_{11}}$   (6.20)
Furthermore, we introduce the force ratio $x$ given by:

$x = Z X_2 / X_1$   (6.21)

As $X_2$ is negative and $X_1$ is positive, the force ratio is a negative quantity, constrained between 0 and -1.
Introducing $q$, $Z$ and $x$ in eqns. 6.18 and 6.19 results in:

$J_1 = L_{11} X_1 (1 + qx)$   (6.22)

$J_2 = Z L_{11} X_1 (x + q)$   (6.23)
Eqns. 6.22 and 6.23 can be used to calculate the ratio of the rate of the transaction at the output of the transducer to the rate of transaction at the input:
$J_2/J_1 = Z (x + q)/(1 + qx)$   (6.24)
Eqn. 6.24 provides a relation between the output of the free value transducer and the input from the downhill process. The output represents the creation of free value for an investor, based on the opportunity created by the gradient of free value in the environment that provides the input. The properties of eqn. 6.24 are analyzed more fully below. The present author (Roels (1983)) applied the concept of the linear free value transducer to oxidative phosphorylation in biological structures, i.e. microorganisms. In oxidative phosphorylation, organisms derive a positive flow of free energy from the oxidation of an energy source such as glucose. Coupled to the oxidation process is the phosphorylation of adenosine diphosphate (ADP) to adenosine triphosphate (ATP). ATP is the energy currency of living systems; it can be used to drive processes of growth and product formation in living cells, and to maintain the integrity of living cells. Thus oxidative phosphorylation is a prime example of the derivation of useful "value", in this case free energy, from the opportunity of a gradient of free energy in the environment provided by the availability of a resource such as the sugar glucose. The linear free value transducer can also be used to approximate transactions in the market of stocks and bonds, where a free value gradient can result from a difference in cost of information between the investor and the other actors in the market, or from a difference in statistical entropy between the investor and the other actors. In both cases asymmetries of information between the investor and the other actors in the market are the source of an opportunity to harvest free value. Systems that create asymmetries in information realize an opportunity to harvest free value and can exploit that opportunity to gain risk-free value to grow their activities or to grow and maintain the source of their advantage.
Such systems move away from equilibrium and exhibit lower statistical entropy than would be the case if they evolved towards equilibrium in the absence of coupling to the source of free value in the environment. In his book What is Life? (Schrödinger (1945)) the Nobel laureate Erwin Schrödinger explains the evolution of ordered structures, more specifically organisms in biology, as a result of their ability to avoid degradation to thermodynamic equilibrium by coupling to what he calls sources of negentropy (or, alternatively, free energy) in the environment. We discuss this more exhaustively in the next chapters. Here we postulate that actors in markets may drift away from equilibrium through the creation of, and coupling to, a gradient in free value by exploiting their superior position in terms of statistical entropy and/or costs of information. The linear free value transducer is a first elementary approach to such situations. To finish this section the properties of eqn. 6.24 will now be studied in somewhat more detail. To follow the earlier work of the present author we adopt a slightly adapted notation and rewrite eqn. 6.24 in the following form:
$J_2/(Z J_1) = (x + q)/(1 + qx)$   (6.25)
On the left-hand side of eqn. 6.25 the normalized operational stoichiometry is introduced by dividing the ratio of the outgoing and ingoing flows by the phenomenological stoichiometry $Z$. In fig. 6.2 the normalized operational stoichiometry is plotted against the force ratio $x$ with the degree of coupling $q$ as a parameter.
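The behaviour plotted in fig. 6.2 can also be tabulated directly from eqn. 6.25; a minimal sketch with illustrative values of $x$ and $q$ only:

```python
# Sketch of eqn. 6.25: the normalized operational stoichiometry
# J2/(Z*J1) = (x + q)/(1 + q*x) against the force ratio x for a few
# degrees of coupling q; these are the curves of fig. 6.2.

def normalized_stoichiometry(x, q):
    return (x + q) / (1.0 + q * x)

for q in (1.0, 0.9, 0.5):
    row = [round(normalized_stoichiometry(x, q), 3) for x in (-0.1, -0.5, -0.9)]
    print(q, row)

# Only for q = 1 does the row stay at 1.0 under every loading; for lower q
# the stoichiometry drops with increasing |x| and reaches zero at x = -q.
```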
Fig. 6.2. A plot of the normalized operational stoichiometry against the force ratio $x$ with the degree of coupling $q$ as a parameter.

Fig. 6.2 shows that, with the exception of the limiting case of a degree of coupling of 1, the operational stoichiometry differs from the phenomenological stoichiometry, i.e. the normalized stoichiometry is lower than 1. This is due to imperfect coupling in the general case. This is in agreement with the well-documented fact that the ratio of the rate of phosphorylation of ATP to the rate of respiration due to the oxidation of glucose in oxidative phosphorylation, the so-called P/O ratio, i.e. the operational stoichiometry of oxidative phosphorylation, generally differs from the theoretical value of 3 (Roels (1983)). For a low absolute value of the force ratio the operational stoichiometry stays close to the phenomenological stoichiometry if the degree of coupling is not too low. If the loading of the free value transducer increases, i.e. if the absolute value of the force ratio increases, the operational stoichiometry decreases even if the degree of coupling stays close to unity. There also exists a value of the force ratio at which the operational stoichiometry becomes zero, i.e. the uphill reaction comes to a halt. This happens if the absolute value of $x$ equals $q$. At this point the downhill reaction is still running at a non-zero rate. This rate, to be indicated by $J_{1m}$, is given by:

$J_{1m} = L_{11} X_1 (1 - q^2)$   (6.26)
This introduces the concept of the so-called maintenance dissipation into the theory of the linear free value transducer. It is the analogue of the concept of maintenance energy in microbiology
(Roels (1983)); even in the absence of growth a microorganism needs a constant flow of free energy to maintain the integrity of its non-equilibrium structure. We return to this concept in section 6.6. We now scrutinize the properties of the linear free value transducer somewhat further. An interesting property to study is the power output $P$ at the uphill side of the transducer; it represents the gain in value per unit time. It could be the profit gained per unit time by a firm that couples to the input force. It is given by:

$P = -L_{11} X_1^2 \, x (q + x)$   (6.27)
Note 6.4. Eqn. 6.27 is derived as follows. The force at the exit of the free value transducer, $X_2$, is negative; hence the power output at the uphill side is:

$P = -J_2 X_2$

Combining this equation with eqn. 6.23 results in:

$P = -Z L_{11} X_1 X_2 (q + x)$

From the definition of the force ratio $x$, eqn. 6.21, it follows that:

$X_2 = x X_1 / Z$

Combining the last two equations results in:

$P = -L_{11} X_1^2 \, x (q + x)$
By differentiating eqn. 6.27 with respect to $x$ and setting this derivative equal to zero, the optimum value of the force ratio at which the power output is maximized, $x_{opt.}$, shows to be equal to $-q/2$. The optimum power output $P_{opt.}$ is:

$P_{opt.} = \tfrac{1}{4} L_{11} X_1^2 q^2$   (6.28)
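A quick numerical check of eqns. 6.27 and 6.28, with purely hypothetical parameter values:

```python
# Hypothetical numerical check of eqns. 6.27 and 6.28: the power output
# P = -L11 * X1**2 * x * (q + x) is maximal at x = -q/2, where it equals
# L11 * X1**2 * q**2 / 4. Parameter values are illustrative only.

L11, X1, q = 2.0, 1.0, 0.8

def power(x):
    return -L11 * X1**2 * x * (q + x)

# scan the admissible range -1 < x < 0 on a fine grid
xs = [-1.0 + 0.001 * k for k in range(1, 1000)]
x_best = max(xs, key=power)

print(round(x_best, 3))         # -0.4, i.e. -q/2
print(round(power(x_best), 3))  # 0.32, i.e. L11 * X1**2 * q**2 / 4
```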
Note 6.5. The derivation of eqn. 6.28 proceeds as follows. Differentiating eqn. 6.27 with respect to $x$ results in:

$dP/dx = -L_{11} X_1^2 (q + 2x)$

Setting the right-hand side equal to zero and solving for $x$ results in:

$x = -q/2$

Substitution of this result in eqn. 6.27 leads to eqn. 6.28.

Secondly, we study the efficiency of the free value transducer, $\eta_W$. The efficiency is defined as the ratio of the output of useful value, i.e. the product of the flow and the corresponding force at the output side of the transducer, divided by the product of the flow and the force at the input side:
$\eta_W = -J_2 X_2 / (J_1 X_1)$   (6.29)
Substituting the quantities defined earlier it follows:
$\eta_W = -x (q + x)/(1 + qx)$   (6.30)
The discussion above showed that if the free value transducer is optimized towards maximum power output the optimum value of the force ratio has to equal $-q/2$; if this is substituted in eqn. 6.30 the efficiency at maximum power output, $\eta_{PW}$, is obtained:

$\eta_{PW} = q^2 / (4(1 - 0.5 q^2))$   (6.31)
This efficiency never exceeds 0.5, the value reached in the case of perfect coupling, i.e. $q = 1$. If the degree of coupling decreases, the efficiency at maximum power output also decreases: e.g. 0.34 at a degree of coupling of 0.9 and a mere 0.07 at a degree of coupling of 0.5. Two conclusions are apparent from this exercise. If the desire is to extract a maximum flow of free value from a free value gradient in the environment, efficiency has to be sacrificed, the maximum efficiency being 50% in the case of perfect coupling. To retain a meaningful efficiency the degree of coupling has to be high, as the efficiency decreases quickly with decreasing degree of coupling. If we optimize the expression in eqn. 6.30 for maximum efficiency by differentiating with respect to $x$ and setting the result equal to zero, the following equations for $x_{opt.}$ and $\eta_{opt.}$ follow:
$x_{opt.} = (\sqrt{1 - q^2} - 1)/q$   (6.32)

$\eta_{opt.} = q^2 / (1 + \sqrt{1 - q^2})^2$   (6.33)
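Before walking through the algebra, the closed forms can be checked against a direct numerical search over eqn. 6.30; a sketch with an illustrative degree of coupling:

```python
# Hypothetical numerical check of eqns. 6.30, 6.32 and 6.33: the efficiency
# eta = -x*(q + x)/(1 + q*x), maximized over the force ratio x, equals
# q**2 / (1 + sqrt(1 - q**2))**2 at x_opt = (sqrt(1 - q**2) - 1)/q.
import math

def efficiency(x, q):
    return -x * (q + x) / (1.0 + q * x)

q = 0.9
x_opt = (math.sqrt(1.0 - q**2) - 1.0) / q
eta_opt = q**2 / (1.0 + math.sqrt(1.0 - q**2))**2

# a grid search over -1 < x < 0 confirms the closed-form optimum
xs = [-0.999 + 0.001 * k for k in range(999)]
x_best = max(xs, key=lambda x: efficiency(x, q))

print(round(x_opt, 3), round(eta_opt, 3))                  # -0.627 0.393
print(round(x_best, 3), round(efficiency(x_best, q), 3))   # -0.627 0.393
```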
Note 6.6. The derivation of eqns. 6.32 and 6.33 proceeds as follows. From eqn. 6.30 we obtain by differentiation with respect to $x$:

$d\eta_W/dx = -[(q + 2x)(1 + qx) - x(q + x)q]/(1 + qx)^2$

Setting the right-hand side equal to zero and some rearrangement leads to:

$q x^2 + 2x + q = 0$

Solving the quadratic equation leads to two solutions:

$x_1 = (-2 + \sqrt{4 - 4q^2})/(2q)$

$x_2 = (-2 - \sqrt{4 - 4q^2})/(2q)$

The second solution leads to a value of $x < -1$; this is outside the range of physically meaningful values. Hence, the only possible solution is obtained after elementary rearrangement:

$x = (\sqrt{1 - q^2} - 1)/q$

Eqn. 6.32 results. If we substitute this optimum value of $x$ in eqn. 6.30, and note that $1 + qx_{opt.} = \sqrt{1 - q^2}$ and $-x_{opt.} = (1 - \sqrt{1 - q^2})/q$, we get:

$\eta_{opt.} = \dfrac{(1 - \sqrt{1 - q^2})}{q} \cdot \dfrac{(q^2 + \sqrt{1 - q^2} - 1)}{q} \cdot \dfrac{1}{\sqrt{1 - q^2}}$

We take into account the following equality:

$(1 - \sqrt{1 - q^2})(1 + \sqrt{1 - q^2}) = q^2$

so that the factor $q^2 + \sqrt{1 - q^2} - 1$ can be rewritten as $\sqrt{1 - q^2}\,(1 - \sqrt{1 - q^2})$. Substituting this in the equation for $\eta_{opt.}$ results in:

$\eta_{opt.} = (1 - \sqrt{1 - q^2})^2 / q^2$

If we multiply the numerator and the denominator by $(1 + \sqrt{1 - q^2})^2$ and use the same equality again, this can be straightforwardly reworked into:

$\eta_{opt.} = q^2 / (1 + \sqrt{1 - q^2})^2$

We finally arrive at eqn. 6.33. In addition, using eqn. 6.27, the power output at the optimum efficiency, $P_{opt.,eff.}$, can be calculated:
$P_{opt.,eff.} = L_{11} X_1^2 (-x_{opt.})(q + x_{opt.})$   (6.34)
Fig. 6.3. Normalized power output and thermodynamic efficiency as a function of the degree of coupling for a free value transducer optimized towards maximum efficiency.

In fig. 6.3 the results obtained directly above are illustrated. It shows a plot of the normalized power output of a free value transducer optimized towards maximum efficiency, and a plot of the maximum efficiency, both against the degree of coupling of the free value transducer. The following picture emerges. The efficiency increases sharply and monotonically with increasing degree of coupling. The normalized power output, however, increases with increasing degree of coupling up to a value of roughly 0.9; beyond that it decreases sharply, and it becomes zero if we strive for an efficiency of 1.0, reached at a degree of coupling of 1. To obtain both a reasonable power output and a reasonable efficiency the degree of coupling has to be between 0.9 and 0.95, corresponding to an efficiency of 0.6 to 0.65. This is appreciably higher than the efficiency of 0.5 which is the maximum for a free value transducer optimized towards maximum power output. This discussion highlights the following features of the linear free value transducer. If the free value transducer is operated in an environment where there is abundant opportunity to harvest a gradient in free value, the best approach is to optimize towards maximum power output and hence to sacrifice efficiency, which will be at most 50%. In an environment where those gradients are scarcer, efficiency becomes the name of the game. We can realize an efficiency of 60-65% if we sacrifice 30 to 40% of the power output. The significance of these remarks will be analyzed further in one of the following chapters.

6.6. The linear free value transducer and the concept of maintenance dissipation.

In the preceding section, in the discussion leading to eqn. 6.26, we noted that the linear free value transducer continues to use free value even if the output power has become zero, unless the degree of coupling is one. To discuss this further, eqn. 6.26 is recapped:
$J_{1m} = L_{11} X_1 (1 - q^2)$   (6.26)
This concept is a well-documented feature of living systems such as microorganisms. Here it will be generalized to apply to all organized structures, including socioeconomic systems such as industrial corporations. The meaning of maintenance dissipation can be made clear as follows. Organizations are structures with a lower statistical entropy than the one corresponding to equilibrium. To maintain such a structure against the forces of the second law of VTT, a constant flow of free value to the system is needed to compensate for the non-zero production of statistical entropy in the transactions and processes that take place in the system. Apparently the maintenance trap can be avoided by going to values of the degree of coupling that are close to unity. However, as was shown in the preceding section, this conflicts with the trade-off between efficiency and meaningful power output. It is now interesting to speculate where the borderline lies between organized structures that need maintenance dissipation and structures that are stable in the absence of such dissipation processes. Let us consider a pound of sugar sitting in a bag. If properly kept, it can be maintained for an indefinite period of time without any expense of maintenance energy. However, it most certainly is a non-equilibrium structure. It originated in sugar canes, or if you like sugar beets, that transform carbon dioxide and water into sugar. In doing this they take advantage of the flow of free energy that reaches the earth in the form of solar radiation. Plants are organized structures that use the free energy in solar radiation in a process called photosynthesis; they have developed the machinery (or, alternatively, the information) to effectively couple to a source of free energy (solar radiation) to drive a mixture of carbon dioxide and water uphill to sugar. Why does sugar not need free energy to maintain its non-equilibrium structure? The answer lies in the processes to which it is exposed in its environment. The pound of sugar escapes revealing its non-equilibrium characteristics by avoiding an environment in which processes take place that allow its destruction. This becomes immediately clear if the sugar is dissolved in water and left alone in an ambient environment.
After a while the solution becomes infected with microorganisms that have developed the ability (have developed the information) to degrade sugar to fuel the multiplication and maintenance of their non-equilibrium structure.

6.7. Conclusion.

In the foregoing we developed a linear model to describe the kinetics of the processes that are involved in free value transactions. It was shown that in systems that can exchange free value with the environment at at least two levels of free value, processes or transactions can and will develop that couple such a gradient in free value to processes in which a useful flow of free value is created. In this process the system develops in the direction of decreasing statistical entropy, or increasing free value. In fact such systems can be both the product and the source of gradients in free value. Gradients in free value can originate from various sources. One obvious one is differences in true value. A less obvious one is of an informational nature: there can exist differences in information, asymmetries in information, between the various actors in the environment. These can lead to differences in statistical entropy and differences in the cost of information. Indeed, these gradients in information are the driving force behind transformations and transactions in a wide variety of systems. It was shown that it is possible to characterize these transactions by various measures of efficiency and efficacy, and the extent of the coupling and the magnitude of the forces were shown to have a profound effect on these measures. It was also shown that non-equilibrium structures will generally need dissipation of free value even if no net production of free value takes place. This is a direct consequence of the fact that they are non-equilibrium structures and as such are subject to the production of statistical entropy by virtue of the second law of VTT.
Let us remind ourselves of the conditions under which the linear free value transducer can provide a picture of reality:
-There exist gradients of free value in the environment.
-The relations between the rates or flows of the transactions and transformations and the forces follow the linear phenomenological equations.
-The phenomenological constants are true constants, independent of the forces.
-The reciprocity relations for the dependence of the flows on the non-conjugate force apply.
-The non-conjugate phenomenological constants are non-zero, i.e. there exists a more or less effective coupling between flows and their non-conjugate forces.
It was not necessary to specify the processes underlying the coupling; it suffices to assume that coupling exists. Under these conditions steady-state systems may evolve, and such non-equilibrium structures are stable, at least in the strictly linear region, as long as the forces which led to their creation exist. We analyze this matter in more detail in the next two chapters, where some of the conditions indicated above will be relaxed.
6.8. Literature cited.

Glansdorff, P., Prigogine, I. (1971), Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley Interscience, New York
Haase, R. (1969), Thermodynamics of Irreversible Processes, Addison-Wesley, Reading (MA)
Onsager, L. (1931a), Phys. Rev. 37 (4), 405
Onsager, L. (1931b), Phys. Rev. 38 (12), 2265
Prigogine, I. (1967), Introduction to Thermodynamics of Irreversible Processes, Third Edition, Wiley Interscience, New York
Roels, J.A. (1983), Energetics and Kinetics in Biotechnology (Chapter 5), Elsevier Biomedical Press, Amsterdam
Schrödinger, E. (1945), What is Life?, Cambridge University Press, New York
CHAPTER 7. DISSIPATIVE STRUCTURES: THE STABILITY OF STEADY STATES.

7.1. Introduction.

The preceding chapter introduced the theory of the linear free value transducer. We showed that it exhibits features that resemble those of organized forms of matter and, more broadly, some of the organizations that exist in our society. The treatment introduced the postulate that organizations and organized forms of matter, such as organisms, are examples of dissipative structures that feed on the opportunity to create a flow of free value to maintain their structure and to grow and develop. A number of authors, e.g. Schrödinger (1945) and also Prigogine and co-workers (Glansdorff and Prigogine (1971), Prigogine (1980), Prigogine and Stengers (1984)), arrived at these conclusions. It was Prigogine who coined the term dissipative structures. However, a significant hurdle still needs to be overcome. In this chapter the approach of Prigogine is followed to illustrate the problem. Firstly, we re-emphasize that the formation of complex ordered structures that evolve is incompatible with equilibrium (at equilibrium the statistical entropy reaches a maximum). More significantly, such structures are also incompatible with non-equilibrium states that are close to equilibrium (in the following chapter the notion "close to equilibrium" is quantified). In the range of conditions where the linear phenomenological rate equations hold (i.e. the equations governing the linear free value transducer), formation of dissipative structures takes place, but complex forms of organized matter or organizations and markets will never evolve. This sheds some doubt on the usefulness of the linear model of the free value transducer. The author hopes to shed some light on this matter in this chapter and the next. The first step, in the next section, is to investigate the stability of non-equilibrium steady states (states that, on the macroscopic level, no longer change in time) in the strictly linear region.

7.2. The stability of non-equilibrium steady states.

The subject of our analysis is a system in which two forces operate to drive two transactions according to linear phenomenological laws:
$J_1 = L_{11} X_1 + L_{12} X_2$   (7.1)

$J_2 = L_{21} X_1 + L_{22} X_2$   (7.2)
We assume that the reciprocity relation holds and the system of equations transforms into:
$J_1 = L_{11} X_1 + L_{12} X_2$   (7.3)

$J_2 = L_{12} X_1 + L_{22} X_2$   (7.4)
Initially the system is in equilibrium. At the beginning of the evolution of the system forces become operative that drive the system away from equilibrium. The production of statistical entropy in the system follows as:
$\Pi_I = J_1 X_1 + J_2 X_2$   (7.5)
We assume that one of the forces, say $X_1$, is kept constant whilst the other is allowed to vary. The derivative of the production of statistical entropy with respect to the force $X_2$ follows as:

$d\Pi_I/dX_2 = X_1 \, dJ_1/dX_2 + J_2 + X_2 \, dJ_2/dX_2$   (7.6)
By combining eqn. 7.6 with the phenomenological rate equations given by eqns. 7.3 and 7.4 the following expression follows:
$d\Pi_I/dX_2 = 2 L_{12} X_1 + 2 L_{22} X_2$   (7.7)
A steady state is reached when the second flow vanishes, i.e. when the right-hand side of the phenomenological equation 7.4 vanishes. In that case the right-hand side of eqn. 7.7, which equals $2J_2$, also becomes zero. This implies that the derivative of the entropy production becomes zero, and hence we conclude that in a steady state the production of statistical entropy reaches an extremum. Taking the second derivative of the entropy production with respect to $X_2$, it follows that:
$d^2\Pi_I/dX_2^2 = 2 L_{22}$   (7.8)
In section 6.4, eqn. 6.15, we observed that the phenomenological constant $L_{22}$ is positive; hence the second derivative in eqn. 7.8 shows to be positive. This allows us to conclude that the extremum of the entropy production in a steady state is a minimum. In this way the theorem of minimum entropy production in a steady state is obtained. In words, the results obtained in this section can be phrased as follows. In a system in which two forces driving two conjugate flows are operative, the system will evolve towards equilibrium if no constraints are applied to the forces. At equilibrium the flows and the forces become simultaneously zero and no macroscopic transformation or transaction activity is finally observed. If one of the forces is constrained to a non-zero level, e.g. by coupling of the conjugate flow to a source of free value in the environment, the production of statistical entropy "seeks" the next best alternative to reaching zero at equilibrium: it realizes the minimum production of statistical entropy that is consistent with the constraints applied to the system. It is important to realize that several assumptions were necessary to reach this conclusion. These were already discussed in the concluding section of chapter 6, but are so important that they are reiterated here. Firstly, the linear phenomenological equations should apply; this shows to be a stringent restriction. Secondly, the phenomenological coefficients should be true constants and not depend on the forces; otherwise eqn. 7.7 cannot be obtained. Furthermore, we also assumed that the reciprocity relation holds. Finally, the phenomenological coefficients for the dependence of the flows on the non-conjugate forces should exceed zero. These are severe restrictions, as we show later on.

Note 7.1. To increase the reader's understanding of the severity of the restrictions formulated above, a simple example is analyzed. Consider heat transport according to the Fourier equation:
$\Phi_Q = -\lambda \, dT/dx$

Fig. 7.1 shows the system described by the Fourier equation. A piece of metal is insulated at both sides. The lower end is in contact with a heat reservoir at temperature $T_1$; the top is in contact with a reservoir at a temperature $T_2$ lower than $T_1$. After initial transients a stable gradient in temperature $dT/dx$ develops. The flow of heat $\Phi_Q$ follows the Fourier equation, with $\lambda$ being the conductivity coefficient.
Fig. 7.1. Heat conduction according to the Fourier equation. The Fourier equation is now compared to the thermodynamic approach in terms of a linear phenomenological equation. The force for the flow of heat is a gradient in the reciprocal of the temperature and the corresponding linear phenomenological equation would read:
Φq = L11 d(1/T)/dx

This is reworked into:

Φq = −(L11/T²) dT/dx
In case λ is a true constant independent of T, the phenomenological coefficient would be given by:
L11 = λT²

If Fourier's law holds, the phenomenological coefficient is far from independent of its conjugate force. This illustrates the limitations of the thermodynamic formalism: the steady state would not correspond to minimum entropy production.

What still needs to be done is to investigate the stability of the steady states in the linear region. Again a system is considered that has been aged to reach a steady state subject to a constrained force X1. Assume that a small fluctuation ΔX2 in the steady state value of the force X2 takes place. This results in a change of the conjugate flow J2 given by:
ΔJ2 = L22 ΔX2   (7.9)
By virtue of the fact that L22 is a positive constant, a positive change in the force will result in an increase of the conjugate flow, and the reverse is true for a negative change in the force. The tendency is that the fluctuation is reversed by the corresponding change in the flow. Hence, the steady state is intrinsically stable; a further evolution of the system beyond the steady state never takes place. This line of reasoning shows that in the realm where the linear phenomenological relations hold, non-equilibrium steady states are intrinsically stable. Considering the objective of this work, to extend the theory to include complex systems in biology and our human society, this conclusion casts doubt on the relevance of the approach followed so far. Biological evolution has led to a great variety of complex organisms, including Homo sapiens, our species. In addition we see in our society a very complex pattern of organization phenomena, ranging from language to industrial corporations. Everywhere we see sustained evolution, in apparent conflict with equilibrium but also with the stability of steady states if we remain in the linear region. Fortunately, developments in thermodynamics in the last decades extended the formalism beyond the linear range; these are the subject of non-linear non-equilibrium thermodynamics. The extension of this formalism to the systems we want to study is the subject of the subsequent chapters.

7.3. Evolution and the linear free value transducer.

In this chapter we have so far discussed some of the limitations of an approach based on the linear free value transducer. If we stay in the linear region, sustained evolution leading to increasingly complex structures does not take place, as steady states in the strictly linear region are intrinsically stable. Still, the linear free value transducer proves a useful model to gain some insight into the phenomena that are observed in markets and industries.
To do this we return to the free value transducer optimized to maximum power output as it was discussed in chapter 6. Eqn. 6.28 provides an expression for the power output, Popt, of this optimized transducer:

Popt = (1/4) L11 X1² q²   (6.28)
We use eqn. 6.28 to show that a basic feature of industry evolution, the concept of the life cycle of an industry as discussed in chapter 1, is obtained using a very simple model. A firm has identified an opportunity in the market from which it wants to extract free value, i.e.
profit, by coupling to that gradient of free value. The degree of coupling is initially zero, but the firm develops, in a learning-by-doing way, an increasingly sophisticated set of captive information that allows it to couple increasingly effectively to the free value opportunity. Assume that the following equation describes the learning-by-doing process:

dq/dt = k(1 − q)   (7.10)
The equation simply states that the growth of q is proportional to the room for improvement in the degree of coupling that still exists. Solving eqn. 7.10 results in:

q = 1 − e^(−kt)   (7.11)
Combining eqns. 6.28 and 7.11 results in:

Popt = (1/4) L11 X1² (1 − e^(−kt))²   (7.12)
This equation is graphically depicted in fig. 7.2, and indeed the model results in the life cycle characteristics as described in chapter 1.
Fig. 7.2. Life cycle characteristics according to an elementary free value approach. The y-axis shows 4Popt/(L11 X1²), the x-axis kt.
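The life cycle curve of eqns. 7.11 and 7.12 is easy to reproduce numerically; the parameter values below are arbitrary illustrative choices, not taken from the text:

```python
import math

# Illustrative parameter values (not from the text)
L11, X1, k = 1.0, 2.0, 0.5

def p_opt(t):
    """Eqn. 7.12: power output of the optimized free value transducer."""
    q = 1.0 - math.exp(-k * t)          # eqn. 7.11: learning-by-doing coupling
    return 0.25 * L11 * X1**2 * q**2    # eqn. 6.28 with the time-dependent q

# S-shaped life cycle: slow start, growth, saturation at 0.25 * L11 * X1**2
samples = [p_opt(t) for t in (0.0, 2.0, 4.0, 8.0, 16.0)]
assert samples[0] == 0.0                                  # no coupling at t = 0
assert all(a < b for a, b in zip(samples, samples[1:]))   # monotone growth
assert abs(samples[-1] - 0.25 * L11 * X1**2) < 1e-3       # mature plateau
```

The plateau value 0.25·L11·X1² corresponds to q = 1, full coupling to the free value opportunity.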
7.4. Conclusion.

In section 7.2 the limitations of dissipative structures that are engaged in value transactions in the linear, near-equilibrium range were discussed. Although there are severe limitations as far as the description of organized structures and organizations is concerned, these linear systems are still useful for a qualitative, and maybe even partly quantitative, understanding of such more complex structures. A basic postulate of value transaction theory is that value tends to flow in the direction of decreasing free value. In "natural" processes free value decreases and/or statistical entropy increases, and the accessibility of the value to a limited information actor continuously
decreases until equilibrium is reached and transformations and transactions are no longer observed in a macroscopic picture. Under certain conditions, if an actor succeeds in connecting to a source of free value in the environment, or if he creates such a source of free value by e.g. exploiting a source of information that is captive to the actor, the actor can drive value uphill against a negative free value gradient and thus create free value out of statistical entropy. The natural direction set by the second law of VTT is locally reversed. This has been interpreted as the creation of "Order out of Chaos", to cite the title of the book of Prigogine and Stengers (1984). In coupling to the source of free value, negentropy as it has been termed by Schrödinger (1945), the statistical entropy gradually decreases until a steady state is reached and the further evolution of the system comes to a halt. In such a near-equilibrium steady state the entropy production becomes the minimum that is consistent with the constraints on the forces that prevent the system from evolving towards equilibrium. Furthermore, the theory predicts, as said, that once established, the steady state is stable as long as the coupling to the source of free value remains in place. Now the time has come to leave the strictly linear range and to proceed towards states further from equilibrium in the next chapters.
7.5. Literature cited.

Glansdorff, P. and I. Prigogine (1971), Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley Interscience, New York.
Prigogine, I. (1980), From Being to Becoming, W.H. Freeman, San Francisco.
Prigogine, I. and I. Stengers (1984), Order out of Chaos, Bantam Books, Toronto.
Schrödinger, E. (1945), What is Life?, Cambridge University Press, New York.
CHAPTER 8. THE NON-LINEAR FREE VALUE TRANSDUCER: SUSTAINED EVOLUTION.

8.1. Introduction.

This chapter extends the basic theory of free value transduction to cover non-equilibrium situations in general, i.e. we relax the limitation to the strictly linear region beyond equilibrium. Non-linear relations between flows and forces are both a characteristic and a source of such non-equilibrium developments beyond the strictly linear region. The remaining chapters of this work focus on further developing the theory and on an analysis of the practical consequences of the theory if it is applied to human society, more particularly industries and markets.

Chapters 6 and 7 analyzed the linear range of value transduction. We showed that, even in the linear range, interesting, generally still macroscopically unobservable, structures appear with a statistical entropy lower than that corresponding to equilibrium. These structures create and thrive on a source of free value and produce statistical entropy at a non-zero rate; such systems are "dissipative structures". Complex ordered structures cannot evolve in the strictly linear region, as the near-equilibrium steady states are intrinsically stable. Whenever a source of free value is available in the environment, systems may and will appear that couple to that source of free value to fuel their maintenance and growth. The available sources of free value can and will be tapped to "organize" a previously less organized situation. We mentioned the Nobel laureate Schrödinger (1945), who speculated that living systems, with activities in apparent conflict with the second law, are the thermodynamic analogue of the free value transducers described here. Organisms are dissipative structures that "feed on negentropy". As such he probably was the first to bridge the intellectual gap between biological evolution and the second law. We show now that this gap can be further closed by the extension of the formalism to the non-linear non-equilibrium region.
The work of Prigogine and co-workers is acknowledged and closely followed (Glansdorff and Prigogine (1971), Nicolis and Prigogine (1977), Prigogine (1980), Prigogine and Stengers (1984)). In the linear realm a steady state finally emerges and the macroscopic properties of the state do not appear to change any more. This is a feature the dissipative structures have in common with equilibrium structures. However, contrary to the situation at equilibrium, where the production of statistical entropy vanishes, these systems continue to produce statistical entropy, fuelled by sources of free value in the environment. In the linear range, when the Onsager phenomenological equations and the reciprocity relations hold, the steady state structure is stable against fluctuations and it is able to produce value. This is contrary to the situation at thermodynamic equilibrium, where value adding work is impossible. In a steady state in the strictly linear range the production of statistical entropy reaches a minimum. This is reminiscent of "social optimality": the dissipation of free value reaches the minimum possible in view of the constraints that the system experiences. It also recalls results of the theory of perfect competition in classical economic theory (Hirshleifer (1976)). We revisit this analogy later on.

In the linear region the dissipative structures that appear do not show complex macroscopic order. We introduced the conjecture that industrial corporations, firms, are complex dissipative structures showing a degree of macroscopic order. This observation provides an alternative answer to the question of Coase (1937) regarding the nature of the firm: firms are complex dissipative structures. These structures appear beyond the linear region. This idea is, in the opinion of the author, very fruitful, and it may bridge a gap just as the application of the theory to biological evolution drastically deepened the fundamental understanding of the processes involved there.
We highlighted one of the problems in following this line of reasoning: in the strictly linear range
The non-linear free value transducer: Sustained evolution
dissipative structures, once established, are intrinsically stable and no further evolution takes place. This clearly conflicts with the observation of a sustained evolution both in biology and in the organizations in our society, including markets and industries. In this chapter we take the final theoretical step to extend the VTT into the non-linear non-equilibrium range. This will closely follow the developments in non-linear non-equilibrium thermodynamics in the last decades; the relevant references were mentioned. The theory is strongly mathematically flavored, and in the author's opinion a full understanding of the mathematical intricacies is secondary to the purpose of this work. We will follow a partly intuitive approach with less mathematical rigor. The reader is referred to the original literature for mathematical detail.

8.2. Evolution beyond the linear range.

The expression for the production of statistical entropy, ΠI, in a system in which forces Xi drive transactions at rates Ji follows as:
ΠI = Σi Ji Xi   (8.1)
In order for the system to maintain a steady state beyond thermodynamic equilibrium against the forces of the second law, a source of free value must exist to which the system can effectively couple, thus at least compensating the production of statistical entropy in the system by feeding on that source of free value. This argument can be reversed: once a source of free value exists in the environment, systems will inevitably develop that couple to this source of free value. The time derivative of the production of statistical entropy follows as:

dΠI/dt = Σi Ji dXi/dt + Σi Xi dJi/dt   (8.2)
It can be shown that, close to equilibrium, where strict linearity in the relations between flows and forces exists, the reciprocity relations hold (see chapter 7), and coupling to a source of free value can take place, a steady state develops that is stable with respect to fluctuations. In the strictly linear range the two contributions at the right hand side of eqn. 8.2 are equal, and the stability of the steady state is guaranteed by the fact that the production of statistical entropy reaches a stable minimum.

Note 8.1. Considering a system with two flows and two forces and assuming that the relations between the flows and the forces are indeed strictly linear and that the reciprocity relation holds, it follows:
J1 = L11 X1 + L12 X2
J2 = L12 X1 + L22 X2
The first term at the right hand side of eqn. 8.2 is:
Σi Ji dXi/dt = L11 X1 dX1/dt + L12 X2 dX1/dt + L12 X1 dX2/dt + L22 X2 dX2/dt
The second term at the right hand side of eqn. 8.2 follows as:

Σi Xi dJi/dt = X1 (L11 dX1/dt + L12 dX2/dt) + X2 (L12 dX1/dt + L22 dX2/dt)
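A quick numerical spot-check of the equality of these two sums may be sketched as follows; the coefficients and the force trajectories are chosen arbitrarily for illustration:

```python
import math

# Illustrative coefficients obeying reciprocity (L12 = L21); values arbitrary.
L11, L12, L22 = 2.0, 0.5, 1.0

def forces(t):
    """Arbitrary smooth trajectories for the two forces."""
    return math.sin(t), math.cos(2.0 * t)

def force_rates(t):
    """Time derivatives of the trajectories above."""
    return math.cos(t), -2.0 * math.sin(2.0 * t)

checks = []
for t in (0.3, 1.1, 2.7):
    X1, X2 = forces(t)
    dX1, dX2 = force_rates(t)
    J1 = L11 * X1 + L12 * X2         # linear phenomenological equations
    J2 = L12 * X1 + L22 * X2
    dJ1 = L11 * dX1 + L12 * dX2      # constant coefficients: dJ follows from dX
    dJ2 = L12 * dX1 + L22 * dX2
    term_JdX = J1 * dX1 + J2 * dX2   # sum_i J_i dX_i/dt
    term_XdJ = X1 * dJ1 + X2 * dJ2   # sum_i X_i dJ_i/dt
    checks.append(math.isclose(term_JdX, term_XdJ))

assert all(checks)                   # the two contributions coincide
```

With L12 replaced by two distinct values L12 and L21 in the two flow equations, the same script shows the two sums drifting apart, in line with the discussion below.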
By inspection it becomes clear that the two terms are indeed equal. Furthermore, both terms vanish at a steady state, where an extremum (a minimum, see chapter 7) is reached. We note that in order to obtain this result we assumed that the phenomenological constants are independent of the forces; otherwise the relations for the terms in eqn. 8.2 become much more complex. Furthermore, the reciprocity relations should hold. If the latter condition does not apply, the contributions become:
Σi Ji dXi/dt = L11 X1 dX1/dt + L12 X2 dX1/dt + L21 X1 dX2/dt + L22 X2 dX2/dt

Σi Xi dJi/dt = X1 (L11 dX1/dt + L12 dX2/dt) + X2 (L21 dX1/dt + L22 dX2/dt)
The two contributions are no longer equal. Beyond the linear region the second law no longer guarantees the stability of steady states. Fluctuations may drive the system away from the steady state and may result in a jump to another steady state, or at least in an evolution away from the steady state. Away from equilibrium, beyond the strictly linear range, the precise nature of the interactions between the entities in the system and with the environment plays an important role: it determines the stability against the fluctuations that test the stability of the steady state. The steady state may be stable with respect to one type of fluctuation and unstable with respect to another type. We will develop this argument further shortly. First the mathematical argument is developed, following the discussion of Nicolis and Prigogine (1977). These authors show that only the first term at the right hand side of eqn. 8.2 is negative definite and vanishes at a steady state, i.e.:
Σi Ji dXi/dt ≤ 0   (8.3)
The equality sign in 8.3 applies to a steady state. Expression 8.3 represents the general evolution criterion as introduced by Nicolis and Prigogine. Now consider a system in a steady state and assume that a fluctuation perturbs both the flows and the forces. It proves convenient to introduce the excess statistical entropy production δXΠI, given by:
δXΠI = Σi δXi δJi   (8.4)
It is shown that in the strictly linear range the excess statistical entropy production is positive; this means that the steady state is stable. In the non-linear region both negative and positive excess production of statistical entropy may occur; a steady state is no longer stable with respect to all possible fluctuations. If the excess entropy production is negative, the steady state is unstable. This can be mathematically phrased as follows:
δXΠI > 0: stable steady state   (8.5)

δXΠI < 0: unstable steady state   (8.6)

If the excess entropy production equals zero we have marginal stability and we reach a bifurcation point. The bifurcation point is a threshold beyond which the steady state ceases to be stable.

Note 8.2. We assume that the strictly linear relations introduced in Note 8.1 apply:
J1 = L11 X1 + L12 X2
J2 = L12 X1 + L22 X2
Calculating the excess entropy production proceeds by first developing equations for δJ1 and δJ2:

δJ1 = L11 δX1 + L12 δX2
δJ2 = L12 δX1 + L22 δX2

Application of eqn. 8.4 leads to the following expression for the excess entropy production:

δXΠI = L11 (δX1)² + L12 δX1 δX2 + L12 δX1 δX2 + L22 (δX2)²
This can also be written as:

δXΠI = L11 (δX1)² + 2 L12 δX1 δX2 + L22 (δX2)²
There are several ways to prove that the expression above is always positive and hence that steady states in the strictly linear region are intrinsically stable. The first approach develops as follows. Using eqn. 8.1 and the linear phenomenological equations, we can write the following expression for the statistical entropy production:

ΠI = L11 X1² + 2 L12 X1 X2 + L22 X2²
Outside thermodynamic equilibrium the entropy production is always positive. Mathematically speaking, the equations for the production of statistical entropy and the excess production of statistical entropy are equivalent; the only difference is the nature of the variables. Because of this mathematical equivalence the properties of both equations are the same. Hence, the excess entropy production is always positive and the steady states in the linear region are stable. For those readers who prefer a more rigorous approach, the following reasoning applies. The equation for the excess production of statistical entropy can be written as:
δXΠI = L11 (δX1)² (1 + 2 (L12/L11)(δX2/δX1) + (L22/L11)(δX2/δX1)²)
As the term outside the brackets at the right hand side is positive, the term in brackets needs to be positive for a stable steady state. We can also write the above equation as:
δXΠI = L11 (δX1)² (1 + 2 (L12/L11) y + (L22/L11) y²)
where y is a dummy variable given by:

y = δX2/δX1
First we prove that the term in brackets has no real zeros; this is the case if the discriminant of the corresponding quadratic equation is negative:
4 (L12/L11)² − 4 (L22/L11) < 0
This inequality is straightforwardly obtained if we consider the following quadratic equation:

ax² + bx + c = 0
High school math tells us that the solutions to this equation are:
x1,2 = (−b ± √(b² − 4ac)) / 2a

As the square root of a negative number is not a real number, the absence of real zeros follows on substituting the relevant constants. In chapter 6 we derived that, due to the second law, the inequality introduced above always holds, as the degree of coupling of the linear free value transducer cannot exceed unity.
If no real solutions to the quadratic term in the expression for the excess production of statistical entropy exist, this function is either always positive or always negative. It is also clear that the quadratic always has either a minimum or a maximum. If it is a minimum we will have proven our point: the excess production of statistical entropy will always be positive. To verify this we consider the second derivative of the quadratic with respect to y:
d²f/dy² = 2 L22/L11
In chapter 6 we showed that the coefficients for the direct coupling of a flow to its conjugate force are always positive; hence the second derivative is positive, the extremum is a minimum, and this proves what we set out to show.

The following argument applies. In the non-linear region the detailed nature of the fluctuations that perturb a steady state becomes relevant; the steady state may be stable with respect to some fluctuations and unstable with respect to others. This leads to an interesting conflict: the initial information in a reduced information description of the system contains only average values of the macroscopic quantities and information about the probability distribution of those values.
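The conclusion of Note 8.2, that the excess entropy production is positive for any fluctuation as long as L11 and L22 are positive and the degree of coupling stays below unity, can also be spot-checked numerically; the coefficient values below are arbitrary illustrative choices:

```python
import math
import random

random.seed(7)
# Arbitrary coefficients satisfying the chapter 6 constraints: L11 > 0,
# L22 > 0 and degree of coupling q = L12 / sqrt(L11 * L22) below unity.
L11, L22 = 2.0, 1.0
L12 = 0.9 * math.sqrt(L11 * L22)     # |q| = 0.9 < 1

worst = float("inf")
for _ in range(10000):
    dX1 = random.uniform(-5.0, 5.0)  # random fluctuations of the two forces
    dX2 = random.uniform(-5.0, 5.0)
    excess = L11 * dX1**2 + 2.0 * L12 * dX1 * dX2 + L22 * dX2**2
    worst = min(worst, excess)

assert worst >= 0.0   # excess entropy production never turns negative
```

Setting the degree of coupling above unity (e.g. L12 = 1.1·√(L11·L22)) makes the quadratic form indefinite, and negative excess entropy production appears, consistent with the second-law bound derived in chapter 6.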
Fig. 8.1. Development of dissipative structures: An elementary bifurcation diagram.
The detailed microscopic state of the system is far from known, and it is the richness of the microscopic behavior that defines the type of fluctuations that may occur. That information is, by the very nature of the macroscopic description, not available. The development routes the system takes come as a surprise to the reduced information observer. To follow Monod (1971), the system is subject to Chance and Necessity. Fluctuations that appear random to the macroscopic observation introduce chance; given the nature of the system such developments will happen, and they are a necessary consequence of the system's detailed microscopic structure.

The discussion can, with reference to fig. 8.1, be summarized as follows. If a system is left to age in the absence of coupling to an external source of free value it reaches, after a sufficiently long period of time, a state of thermodynamic equilibrium. In that state the macroscopic appearance of the system no longer changes with time. In addition its statistical
entropy is maximized and the system is not able to engage in value adding transactions. This is point A in the bifurcation diagram in fig. 8.1. If the system succeeds in coupling to a source of free value in the environment, a new steady state may emerge. Initially, at moderate forces characterizing the source of free value, the system moves further and further away from equilibrium, and the resulting steady states remain stable as long as we remain in the strictly linear region. No developments take place that are unexpected based on the macroscopic reduced information picture. The development goes along the so-called macroscopic branch AB of the bifurcation diagram. In terms of economic activity point A would reflect the perfect competition world, where everybody supports his own needs by buying and selling goods at equilibrium prices; no free value is created in these transactions. Along the branch AB some elementary kind of specialization and division of labor may occur, and asymmetries in information with respect to certain goods and services may start to develop and allow some kind of return to the specialists with more information. At point B the source of free value has become so potent (this may be caused by the increasing information advantage of some actors) that the macroscopic branch becomes unstable to some of the fluctuations in the system. A new development path, indicated by the line c in our bifurcation diagram, represents the new stable states. It reflects macroscopically organized states different from the unstable, less organized states obtained by extrapolation of the macroscopic branch. The system may, as an example, have developed superior technologies and is thus able to extract more free value from resources and needs in the environment.
The actors may at first observe the system to continue developing along the macroscopic branch, and nothing "unexpected" occurs. However, the macroscopic branch has become unstable to some of the fluctuations that take place, and at a certain moment in time the system jumps from the macroscopic branch to the branch indicated c.

Note 8.3. The phenomenon of instability of the macroscopic branch beyond a certain critical limit can be illustrated using a simple physical example, albeit one that is only partly representative. Consider water at ambient temperature, where it is liquid. In the fluid the molecules can move around relatively freely and the structure does not show order from the macroscopic perspective. In fact this absence of order is partly an illusion. At 25 degrees centigrade there is already appreciable hydrogen bonding, i.e. bonding on the basis of interaction between the hydrogen and oxygen atoms of different water molecules. These bonds are still of a transient nature and do not drastically hinder the water molecules from moving around relatively freely. Different values for the average number of hydrogen bonds per water molecule at 25 degrees appear in the literature; the values range from about 2.4 to 3.2. For liquid water at 0 degrees a number of 3.7 has been reported, i.e. already very close to the number of four representative of ice. At ambient pressure liquid water is no longer the stable form at 0 degrees. It changes into ice, an ordered solid structure in which each water molecule engages, as said, in four hydrogen bonds with its neighbors. This is an example of a phase transition resembling the appearance of order in the region where the macroscopic branch is no longer stable and more complex structures appear, i.e. as in the situation regarding the stability of the macroscopic branch discussed above.
We stressed that complex dissipative structures may not form directly when the critical point at which the bifurcation is located is passed. The system may continue to develop along the thermodynamic branch and suddenly jump to the more stable complex dissipative structure some distance beyond the
critical point. This resembles the situation in liquid water, where supercooling down to almost −40 degrees has been observed, i.e. water remains liquid beyond the limit of its normal freezing point. The analogy with the formation of complex dissipative structures beyond the critical point is apparent. The difference is that dissipative structures are non-equilibrium structures, whilst ice, although more ordered than liquid water, remains an equilibrium structure.
Fig. 8.2. The evolution of dissipative structures: A more complex bifurcation diagram showing successive primary, secondary and tertiary bifurcations.

In a more complex system the situation may be more involved; the bifurcation diagram depicted in fig. 8.2 may apply. Again the system is initially at equilibrium and it starts developing along the macroscopic branch by the creation and exploitation of increasingly potent sources of free value. At point A, the critical point, characterized by an excess statistical entropy production of zero, two types of development become open as the system leaves the macroscopic branch; these developments are indicated AB and AC. The macroscopic observer has no way of knowing what will happen: not only are both development paths unexpected to the observer but, in addition, he has no way of knowing which branch the system may take, or even of being aware of the fact that the system follows a new development route. The behavior of the system becomes unpredictable; the developments become revolutionary to the observer. As the bifurcation diagram develops, new bifurcation points B and C are reached at which the system again chooses between accessible development routes. This may again happen at point D. The behavior pointed out above has important consequences. The first important feature is that once we have passed several bifurcation points, the state of the system depends very much on the path the system took at the various bifurcation points. The historical evolution depends on the "choices" that were made, or alternatively, the fate of the system at the various bifurcation points. This is strongly reminiscent of features of biological evolution, career paths of people and the historical dimension of the development of firms. From an equal situation in the past the evolution may lead to very different situations today.
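As a toy illustration of such path dependence one can use the standard pitchfork normal form dx/dt = μx − x³, a textbook bifurcation model, not taken from the text: beyond the critical point μ = 0 the sign of an arbitrarily small initial fluctuation decides which of the two stable branches, x = +√μ or x = −√μ, the system ends up on.

```python
def settle(x0, mu=1.0, dt=0.01, steps=5000):
    """Integrate dx/dt = mu * x - x**3 forward from the fluctuation x0."""
    x = x0
    for _ in range(steps):
        x += (mu * x - x**3) * dt
    return x

# Two histories identical except for the sign of a tiny initial
# fluctuation settle on opposite stable branches x = +1 and x = -1
# (mu = 1): "chance" at the bifurcation, "necessity" afterwards.
up = settle(+1e-6)
down = settle(-1e-6)
assert abs(up - 1.0) < 1e-3
assert abs(down + 1.0) < 1e-3
```

Once a branch is chosen the further development is fully determined; the two end states differ macroscopically although the starting conditions were, for any reduced information observer, indistinguishable.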
Numerous examples can be found in industry, for instance in the pharmaceutical industry, where the big pharmaceutical firms of today have histories ranging from a drugstore (GlaxoSmithKline) to a producer of citric acid (Pfizer) and a producer of artificial colorants (Roche).
The second feature is related to the future. The reduced information picture of reality leads to an unpredictable future. This applies to all actors in the market, even to the management of a firm, although that management can be expected to have more information about the present state of the firm than observers outside the company. Even for the management, full understanding of the competitive environment, the structure of the market in which they operate and detailed knowledge of the exact value microstate their firm is in require far more information than it can economically obtain. The ramifications of this situation for aspects such as strategic management are clear and are discussed in somewhat more depth in a later chapter. We recover the concept of "bounded rationality" as it has been proposed by, among others, Williamson (1975). So far we have only discussed the stability characteristics beyond the critical limit where the strictly linear region ends. Another point of interest is the transients involved when a structure becomes unstable and evolves along another stable development route that takes over. Furthermore, the ways in which states that are stable against fluctuations return to the steady state after a disturbance have remained untouched. This is the area of the linear and non-linear stability theory of complex systems and their evolutionary dynamics. It is beyond the scope of this book to treat this important subject in detail. It is, however, too important to the subject matter of this work to leave it completely untouched. In some instances the dynamics of shifts between steady states, and of the return to a steady state after a disturbance by a fluctuation, may be smooth, i.e. a gradual development results. This is, however, certainly in complex systems, the exception rather than the rule.
The theory of system dynamics introduces the concept of so-called relaxation times, characteristic of the dynamics of the damping of fluctuations and of the movement to another state if the initial state becomes unstable. A smooth transition takes place if the relaxation times are real numbers. If, however, some of the relaxation times are complex numbers, wave-like phenomena and oscillations may characterize the development of the system. This has been observed e.g. in patterns of chemical reactions, such as in the metabolism of organisms (Glansdorff and Prigogine (1971)), where oscillations with a variety of characteristic times are observed. Also in biological ecosystems (Nicolis and Prigogine (1977)) such phenomena are common. It is not difficult to see the relevance of this behavior for the modeling and analysis of the socioeconomic systems that are the subject of this book. Short and long term fluctuations in the stock markets and the various cycles that appear in our economies are easily accommodated in this framework. This section developed the foundations for the extension of VTT into the non-linear region. To illustrate these concepts some physical examples will be discussed in the next two sections. The reader can choose to skip these illustrations without sacrificing material that is necessary to understand the remainder of the arguments. However, the physical examples are illustrative of phenomena that characterize markets and industries.

8.3. The Benard problem and instability of the macroscopic branch.

An interesting example that serves to illustrate instability of the macroscopic branch is the Benard problem. We consider a system that is very simple indeed; it is depicted in fig. 8.3. A fluid contained between two parallel plates is heated from below. The temperature gradient between the two plates provides a source of free energy.
A flow of heat from the lower to the upper plate develops as a consequence of the force created by the temperature difference. For a moderate difference in temperature between the upper and the lower plate, the heat flow through the fluid takes place by conduction, i.e. by interaction between individual molecules. Macroscopically observable movement of the fluid does not take place. This situation can be
85
The non-linear free value transducer: Sustained evolution
adequately described by a very simple macroscopic mathematical model, such as the Fourier equation introduced in chapter 7.
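The distinction drawn at the start of this section, real relaxation times giving smooth damping versus complex ones giving oscillations, can be made concrete with a small numerical sketch. The matrices below are purely illustrative values, and `eigenvalues_2x2` is a helper defined only for this example; relaxation times are minus the reciprocals of the eigenvalues of the linearized dynamics.

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Real, negative eigenvalues: fluctuations decay smoothly back to the steady state.
smooth = eigenvalues_2x2(-1.0, 0.2, 0.3, -2.0)

# A complex-conjugate pair: fluctuations decay through damped oscillations,
# as in the oscillating chemical and ecological systems cited in the text.
oscillatory = eigenvalues_2x2(-0.1, -1.0, 1.0, -0.1)

print(smooth)        # both eigenvalues real (imaginary parts zero)
print(oscillatory)   # -0.1 + 1j and -0.1 - 1j: damped oscillation
```

The second matrix yields eigenvalues with non-zero imaginary parts, the signature of the cyclic behavior the text associates with business cycles and market fluctuations.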
Fig. 8.3. The Benard problem: A fluid is contained between two parallel plates and heated from below.
Fig. 8.4. The Benard problem: Structure formation at a temperature difference exceeding the critical level.

The situation changes when the temperature difference between the upper and the lower plate exceeds a critical value. The steady state in which heat is transported by conduction, without observable macroscopic movement of the fluid, is no longer stable and we leave the conduction regime. The fluid now develops more or less regular, macroscopically observable circulation patterns, so-called convection cells or vortices. In these patterns many millions of molecules cooperate in a concerted fashion. Apparently the system becomes organized when the free energy gradient exceeds a critical level. The situation is depicted in fig. 8.4. Another interesting feature is that the dissipation of free energy is larger than the rate corresponding to a continuation of the macroscopic branch. This implies that conduction is less effective in coupling to a gradient in free energy than the cooperative efforts of the molecules in the convection cells. These molecules "cooperate" to take advantage of the free energy gradient in the environment. Loosely phrased this can also be interpreted as follows. As soon as the temperature gradient exceeds a certain critical value, an opportunity arises to exploit a source of free energy (or free value) to a wider extent than would be possible on the basis of the interaction of single entities. The possibility exists to exploit the resource contained in the gradient more
exhaustively if groups of entities cooperate in more or less ordered structures. This points to an analogy with the nature of the firm. To some authors (Alchian and Demsetz (1972)), a basic feature of the firm rests in team production, based on the coordinated effort of many individuals. The foregoing was phrased entirely in terms of free energy, i.e. a completely physical example was highlighted. We can also treat a free value analogue. If a gradient of free value exists in an economy and is discovered, for example a difference in information between a buyer and a seller, a non-equilibrium structure may develop that exploits this gradient in free value more effectively than would be possible on the basis of an exchange between single actors. At first, in the strictly linear region, no drastic changes in the structure of the system will result. This is the situation in which the gradient in information is still relatively small. At a certain critical size of the gradient in free value the system changes drastically and ordered structures appear that exploit the corresponding force in a more effective way. In fact these structures may be responsible for further increasing the gradient in free value, e.g. by increasing their information advantage. After a while it becomes hard to judge whether the structures exist because of the gradient of free value or whether the structures themselves create the source of free value. A chicken-and-egg situation arises and the question of what came first becomes irrelevant. A number of actors may choose to cooperate to create and exploit the source of free value. Initially some form of primitive cooperation, such as a tribe or a master-apprentice relation, develops; later on complex multinational firms result. This resembles, for example, the situation in colonies of social insects such as termites (Prigogine and Stengers (1984)).
We now summarize some important features of the elementary example of the formation of dissipative structures discussed in this section. Firstly, there is the condition that the force driving the flow of free value needs to exceed some critical lower limit. The development of dissipative structures is possible only if there is sufficient free value potential to harvest. Secondly, there is the aspect of cooperation between the entities in the system. Beyond a critical limit of the force, the potential for harvesting free value can only be effectively exploited if entities choose to cooperate. We again return to the rationale behind "the firm" that Coase (1937) sought and some concept of "team production" as suggested by Alchian and Demsetz (1972). Thirdly, some form of non-linearity, a deviation from a strictly linear relation between flows and forces, needs to be in place. In the case of the Benard problem this non-linearity resides in the inertia phenomena that are inherent to the movement of macroscopic amounts of fluid. This leads to some crude form of a memory and a reproduction mechanism, which stabilizes the convection cells once they have emerged. The remainder of our discussion in this chapter shows that the features appearing in this elementary example return as sufficient and necessary conditions for the development of dissipative structures.

Note 8.4. The reader may have some difficulty in understanding the relation between inertial forces and a memory effect. Still, this author is convinced that the majority of readers have experienced this relation in their daily life. If we drive a car the problem of inertia can be felt if we suddenly have to brake. Many of the objects seem to remember their speed before the brake was engaged, and the objects, including the driver, express this memory by moving towards the windshield of the car during the braking operation.

8.4. Instability of the macroscopic branch in chemical reaction systems.
The next example analyzes the conditions leading to instabilities of the macroscopic branch in
systems where chemical reactions take place. First a simple transformation process according to the following equilibrium reaction is considered:
R \rightleftharpoons X \qquad (8.7)
This reaction scheme implies that a resource, R, is transformed into a product, X, and that the reverse process is also possible. For readers who are less familiar with the formalism of chemical kinetics, it is also possible to think of the reaction as a transaction in which a product R is sold and money X is obtained, and vice versa. Applying so-called mass-action law kinetics (Roels (1983)), the net forward rate R_F of process 8.7 follows as:
R_F = k_1 C_R - k_{-1} C_X \qquad (8.8)
In eqn. 8.8, k_1 is the rate constant of the forward reaction, C_R is the concentration of the reactant, k_{-1} is the rate constant of the reverse reaction, and C_X is the concentration of the product.
Note 8.5. Eqn. 8.8 is based on so-called mass-action law kinetics. This assumes that the rate of a chemical reaction is proportional to the product of the concentrations of the reactants, each raised to a power given by the number of molecules (or distinguishable entities) that participate in the reaction. As an example consider the case of a reaction between A and B to give a product C:

A + B \rightarrow C

The rate R of this reaction is, according to mass-action law kinetics, given by:

R = k_1 C_A C_B
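The two rate laws just introduced (eqn. 8.8 and the bimolecular example of Note 8.5) are simple enough to express directly in code. The sketch below uses illustrative rate constants and concentrations; the function names are introduced here for the example only.

```python
def rate_reversible(k1, k_1, c_r, c_x):
    """Net forward rate of R <-> X under mass-action kinetics (eqn 8.8)."""
    return k1 * c_r - k_1 * c_x

def rate_bimolecular(k1, c_a, c_b):
    """Rate of A + B -> C (Note 8.5): proportional to C_A times C_B."""
    return k1 * c_a * c_b

# The net rate of the reversible reaction vanishes at equilibrium,
# i.e. where C_X / C_R equals k1 / k_1.
assert rate_reversible(2.0, 1.0, 1.0, 2.0) == 0.0
print(rate_reversible(2.0, 1.0, 3.0, 1.0))   # 5.0: net conversion of R into X
print(rate_bimolecular(0.5, 2.0, 3.0))       # 3.0
```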
The force X_F driving reaction 8.7 is \Delta(-g_R/T). A fundamental result of thermodynamics leads to the following expression (Roels (1983)):

g_i = g_i^0 + RT \ln C_i \qquad (8.9)
In this equation g_i is the molar free energy of compound i, g_i^0 is the molar free energy of i at unit activity, R is the universal gas constant, T is the absolute temperature and C_i is the concentration of compound i. The formulation of eqn. 8.9 assumes the idealized case in which the activity of compound i equals its concentration. If we apply eqn. 8.9, the following expression for the force follows:
X_F = \Delta\!\left(-\frac{g_R^0}{T}\right) - R \ln\frac{C_X}{C_R} \qquad (8.10)
Here \Delta(-g_R^0/T) is the affinity of the reaction at unit activity of the reactant and the product. Consider a fluctuation in the system resulting in fluctuations in the flows and the forces; the following relation can be understood to follow:
\partial R_F = (k_1 + k_{-1})\,\partial C_R \qquad (8.11)
This equation derives straightforwardly from eqn. 8.8, considering that a change in C_R results in an equal negative change in C_X. Furthermore, from eqn. 8.10 it follows:
\partial X_F = R\,\partial C_R \left( \frac{1}{C_X} + \frac{1}{C_R} \right) \qquad (8.12)
The excess entropy production follows from eqn. 8.4, using eqns. 8.11 and 8.12:
\partial_X \Pi = R\,(\partial C_R)^2 \,(k_1 + k_{-1}) \left( \frac{1}{C_X} + \frac{1}{C_R} \right) \qquad (8.13)
Eqn. 8.13 shows that the excess statistical entropy production is positive; hence we expect a stable "macroscopic branch" evolution without any "unexpected" developments. As a contrast we introduce a different system, a so-called autocatalytic process according to the reaction scheme:
R + X \rightleftharpoons 2X \qquad (8.14)
Eqn. 8.14 introduces a system in which a compound or an entity enhances its own rate of synthesis. This situation, an example of autocatalysis, is widely encountered in biological systems and also in human society, e.g. in our economy. By applying mass-action law kinetics the following expression for the rate of the reaction depicted in 8.14 follows:

R_F = k_1 C_R C_X - k_{-1} C_X^2 \qquad (8.15)
From eqn. 8.15 we derive the following expression for the change of the rate of reaction upon a small fluctuation:

\partial R_F = \partial C_R\,(k_1 C_X - k_1 C_R + 2 k_{-1} C_X) \qquad (8.16)

The force driving the reaction does not change compared to the earlier example, i.e. eqn. 8.12 still applies to the fluctuation of the force. If eqns. 8.12 and 8.16 are combined, the following expression for the excess entropy production follows for the modified kinetic scheme:
\partial_X \Pi = R\,(\partial C_R)^2 \left( \frac{1}{C_R} + \frac{1}{C_X} \right)(k_1 C_X - k_1 C_R + 2 k_{-1} C_X) \qquad (8.17)
From eqn. 8.17 it follows that the change of the kinetic scheme has a profound influence on the excess entropy production on coupling to the same force. The excess entropy production
becomes negative if the following inequality holds:

\frac{C_R}{C_X} > \frac{k_1 + 2 k_{-1}}{k_1} \qquad (8.18)
This restriction shows that if the force driving the autocatalytic process exceeds a limiting value, the excess entropy production becomes negative and the continuation of the macroscopic branch becomes unstable. New, unexpected developments may occur; the system may evolve beyond the macroscopic branch. The reader should note that this results from the different way in which the system couples to the same force as in the first example in this section, where the macroscopic branch proved stable. Of course, a single reaction can never lead to a lasting evolution away from equilibrium; only if such a reaction is part of a pattern of reactions, such as a metabolic network in a microorganism, can it trigger instability of the macroscopic branch. The cases studied in this section and the preceding one identify the following conditions for the development of complex dissipative structures through instability of the macroscopic branch:

- Availability of a sufficiently potent source of free value.
- Transformation processes characterized by a non-linear dependency of the flows on the forces, particularly when the common situation of autocatalysis occurs.
- The existence of fluctuations that continuously test steady states for their stability.
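The sign analysis of eqns. 8.13 and 8.17, and the threshold of inequality 8.18, can be checked numerically. The rate constants and concentrations below are illustrative values, and the function names are introduced here for the sketch only.

```python
R_GAS = 8.314  # universal gas constant, J/(mol K)

def excess_ep_simple(k1, k_1, c_r, c_x, d_cr=1e-3):
    """Eqn 8.13: excess entropy production for R <-> X; always positive."""
    return R_GAS * d_cr ** 2 * (k1 + k_1) * (1 / c_x + 1 / c_r)

def excess_ep_autocatalytic(k1, k_1, c_r, c_x, d_cr=1e-3):
    """Eqn 8.17: excess entropy production for R + X <-> 2X."""
    return (R_GAS * d_cr ** 2 * (1 / c_r + 1 / c_x)
            * (k1 * c_x - k1 * c_r + 2 * k_1 * c_x))

k1, k_1 = 1.0, 0.1
# Inequality 8.18: instability once C_R / C_X exceeds (k1 + 2 k_1) / k1 = 1.2.
print(excess_ep_simple(k1, k_1, 5.0, 1.0))          # positive: branch stable
print(excess_ep_autocatalytic(k1, k_1, 1.0, 1.0))   # positive: still stable
print(excess_ep_autocatalytic(k1, k_1, 5.0, 1.0))   # negative: branch unstable
```

The same force (concentration ratio) that leaves the simple scheme stable drives the autocatalytic scheme unstable, which is the point of the comparison in the text.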
Again, basic features result that may start to become familiar.

8.5. A generalized free value transducer.
Fig. 8.5. A non-linear free value transducer.

We discussed examples from physics and chemistry to illustrate instabilities in the non-linear range. Here we will study a free value transducer resembling the linear free value transducer introduced earlier in chapter 6. It is shown in fig. 8.5. The case discussed here differs from the linear free value transducer by the presence of non-linearities. The transducer is driven by a force X_1; the rate of the conjugated downhill flow is J_1. The transducer couples to X_1 to produce free value in an uphill process, against a negative force X_2, at a rate J_2.
The rate equations describing the transducer are:
J_1 = L_{11} X_1 + L_{12} X_2 \qquad (8.19)

J_2 = L_{21} X_1 + L_{22} X_2 \qquad (8.20)
Eqns. 8.19 and 8.20 have the same structure as in the linear free value transducer; however, this time we assume that only L_{11}, L_{22} and L_{12} are true constants. We have relaxed the reciprocity relation and assume L_{21} to depend on the force X_2:
L_{21} = f(X_2) \qquad (8.21)
The force X_1 is constant; the force X_2 is subject to fluctuations \partial X_2. According to eqn. 8.4 the excess production of statistical entropy is:
\partial_X \Pi = \partial J_1\,\partial X_1 + \partial J_2\,\partial X_2 \qquad (8.22)
As X_1 is constant, the first term at the right hand side of eqn. 8.22 is zero. We calculate the fluctuation in the uphill flow to obtain an expression for the excess production of statistical entropy. From eqns. 8.20 and 8.21 we obtain:
\partial J_2 = \left( L_{22} + X_1 \frac{\partial f(X_2)}{\partial X_2} \right) \partial X_2 \qquad (8.23)
Combining eqns. 8.22 and 8.23 leads to:
\partial_X \Pi = \left( L_{22} + X_1 \frac{\partial f(X_2)}{\partial X_2} \right) (\partial X_2)^2 \qquad (8.24)
The excess statistical entropy production is negative and the steady state unstable, if the term in brackets at the right hand side of eqn. 8.24 is negative, i.e.:
L_{22} + X_1 \frac{\partial f(X_2)}{\partial X_2} < 0 \qquad (8.25)
Eqn. 8.25 is rearranged to:
\frac{\partial f(X_2)}{\partial X_2} < -\frac{L_{22}}{X_1} \qquad (8.26)
Eqn. 8.26 results in a constraint beyond which the steady state is no longer stable and the system starts evolving. The state to which the transducer evolves may be a state of higher complexity, exactly like the situation in the physical cases we discussed in the previous examples. Also non-linear non-equilibrium VTT leads to the possibility and the necessity of evolution into states of higher complexity. Eqn. 8.26 proves another point: if X_1 increases, the restriction gets less severe. A higher force of the source of free value will tend to increase the possibilities for instability of the steady state. This is a point we already observed in the physical examples. The second point is that the derivative at the left hand side of inequality 8.26 is negative, i.e. a larger absolute value of the negative force X_2 will lead to increased coupling to the flow J_2, i.e. the kinetics show some kind of autocatalysis. This is a feature we also observed in the physical examples. Finally, we turn to the power output at the output side of the transducer; it is given by:
P = -J_2 X_2 \qquad (8.27)
We formulate the following expression for the derivative of P with respect to the output force:
\frac{\partial P}{\partial X_2} = -\left( X_2 \frac{\partial J_2}{\partial X_2} + J_2 \right) \qquad (8.28)
From eqn. 8.23 we obtain:
\frac{\partial J_2}{\partial X_2} = L_{22} + X_1 \frac{\partial f(X_2)}{\partial X_2} \qquad (8.29)
Combining eqns. 8.28 and 8.29 results in:
\frac{\partial P}{\partial X_2} = -\left( X_2 \left( L_{22} + X_1 \frac{\partial f(X_2)}{\partial X_2} \right) + J_2 \right) \qquad (8.30)
If we realize that X_2 is negative due to the uphill force at the output of the transducer, that J_2 must be positive for a meaningful transducer exhibiting a positive uphill flow, and consider expression 8.25, it follows that an increase in the absolute value of the uphill force, i.e. if it gets more negative, leads to an increased power output of the transducer, i.e. it improves its performance. We clearly see a learning-by-doing increase of performance upon instability of the steady state and the right type of fluctuation. This does not go on forever. Inequality 8.26 must be fulfilled, and the function f(X_2) cannot increase indefinitely with increasingly negative values of the output force; this would lead to a violation of the constraint on the value of the degree of coupling, which should remain smaller than unity. Hence at some value of the output force the state becomes stable again and we have reached a new steady state. Our simple model shows many of the characteristics we identified in studying the physical examples. It brings these phenomena within the realm of VTT.
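The stability criterion of eqns. 8.24-8.26 and the power output of eqn. 8.27 can be illustrated numerically. The coupling function `f` and all coefficient values below are hypothetical choices made for this sketch, not values from the text.

```python
def flows(x1, x2, L11, L12, L22, f):
    """Eqns 8.19-8.21: transducer flows with L21 = f(X2)."""
    j1 = L11 * x1 + L12 * x2
    j2 = f(x2) * x1 + L22 * x2
    return j1, j2

def stability_coefficient(x1, x2, L22, f, h=1e-6):
    """Bracketed term of eqn 8.24; the steady state is unstable when negative."""
    dfdx2 = (f(x2 + h) - f(x2 - h)) / (2 * h)   # numerical df/dX2
    return L22 + x1 * dfdx2

# Hypothetical coupling: L21 strengthens as the output force gets more negative.
f = lambda x2: 1.0 - 0.5 * x2
x1, x2, L22 = 1.0, -1.0, 0.2

coeff = stability_coefficient(x1, x2, L22, f)
print(coeff)        # -0.3 < 0: inequality 8.25 holds, the steady state is unstable

j1, j2 = flows(x1, x2, L11=1.0, L12=0.5, L22=L22, f=f)
print(-j2 * x2)     # power output P = -J2 X2 (eqn 8.27), positive here
```

With these numbers df/dX2 = -0.5 < -L22/X1 = -0.2, so inequality 8.26 is violated and the transducer can evolve away from the steady state, as described above.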
8.6. Evolution through fluctuation and selection: The general case.

The preceding sections highlighted that under a number of quite common conditions the macroscopic branch of the evolution of macroscopic systems near equilibrium becomes unstable. Complex ordered structures appear, unexpected from a reduced-information, macroscopic, perspective. In this section some new elements will be introduced. The first one is competition between the entities that are present in the system. Competition is a quite common driver of evolution in biological systems and it certainly also is a characteristic of our economies, where firms contest markets for goods and services. Competition occurs in an environment where resources, sources of free value, are scarce. In that case the interactions between the entities in the system take the shape of a struggle for life in the quest for sources of free value. Situations of pseudo-equilibrium may appear to exist, but more commonly the situation is a dynamic one in which new entities emerge and existing ones grow or decay and become extinct. In this section we follow the discussion of Nicolis and Prigogine (1977) and extend it to economies and markets. Such an endeavor is not new, but a consistent discussion of the extension to economic systems is, to the knowledge of the present author, still lacking. More particularly, the relation with VTT and information will be highlighted. This presents a new approach and leads to new insights. As a case in point we discuss biological evolution in the context of the work of Nicolis and Prigogine (1977). This is material relevant to the discussion in this work if we remember that industrial organizations, science, technology, culture, art, economies, nations,... are products of a biological evolution that started eons ago. It is basically fuelled by the source of free value in the solar radiation that reaches the earth.
The analogy between the dynamics of competition for markets and the existence of firms on the one hand, and biological evolution on the other, has been highlighted in the literature (e.g. Alchian (1950), Hirshleifer (1977), and Beinhocker (2007)). There also exists an evolutionary approach to strategic thinking (Nelson and Winter (1982), Dopfer (Ed.) (2005)). The present day theory of biological evolution starts from the fact that the information needed to construct, maintain and grow a biological structure is contained in a vast chemical code in the form of large macromolecular entities termed DNA (deoxyribonucleic acid). The code contains four symbols, which, in groups of three (codons), code for a limited number (about 20) of structural elements (amino acids) and some editing codes. The amino acids form the basis of a wide variety of macromolecular structural elements and catalysts. In this way the genotype of an organism (the genetic code) translates into the phenotype, which allows the functioning of the organism and forms the basis of its ability to compete. One of the important processes in the growth of biological entities rests in the copying of the genetic information to pass it on to new generations. This copying process is of imperfect fidelity; this is partly due to the chemical characteristics of the code and the proofreading mechanism that the cell uses to ensure copying fidelity. It is also a basic feature of survival value. We will return to this later on. The reader is referred to some worthwhile and accessible accounts of the nature of the genetic code (Dawkins (1976, 1987), Eigen (1971)). The above characterization reveals that life is an information based game, and this analogy can easily be extended to the human organizations that are encountered in everyday life. To highlight the complexity of the information contained in living systems, consider the information needed to specify the human genome: it requires some 10^10 bits. Construction of all the different DNA-species that are possible on the basis of this amount of information would require very much more matter than is present in the universe.
It was already noted that the replication of the genetic code contains errors, and hence variation is induced in the genotype that is passed on to the next generation. This will result in changes of the phenotype. This is the basis of sustained evolution of biological systems and it is an important feature of competitiveness at the species level. The theory has been summarized in the important contributions of Manfred Eigen (1971, 1977, 1978a, 1978b). We again stress that the replication errors in the multiplication of DNA are an important driver of sustained evolution. These provide the fluctuations that test the stability of the system. These errors provide, in principle random, variation of the genotype and the resulting phenotype and are not goal oriented, i.e. they do not provide a direction to evolution. It is the environment, with its sources of free value shaped by the competition between the actors, that provides an arrow to evolution. A necessary condition for a direction of evolution is an element of scarcity of resources, scarcity of sources of free value. This allows a selection in the variety of phenotypes and associated genotypes. The source of variation is at the level of the genotype, whilst selection takes place at the level of phenotypes. As soon as there is scarcity of resources, a process of natural selection and survival of the fittest emerges (Darwin (1872)). In the evolution process structures develop that are, given the resources that exist or can be created in the environment and the competition with the other actors, optimally suited to survive and hence will survive. This is a dynamic equilibrium with an important historic dimension. This is also the case if we look at competitive dynamics in industry. A fixed industry structure does not exist; industry structure is subject to a constant process of evolution. This is an important aspect of modern approaches to competitive strategy in industry.
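The mechanism described here, random variation at the genotype level combined with selection at the phenotype level under scarcity, can be caricatured in a few lines of code. This is a deliberately toy model, not the author's formalism: the genotype is a bit string, the "phenotype fitness" is simply the number of 1-bits, and all parameter values are illustrative.

```python
import random

random.seed(1)
GENOME_BITS, POP_SIZE, ERROR_RATE = 32, 50, 0.01

def replicate(genome):
    """Copy a genotype with imperfect fidelity: each bit may flip."""
    return [bit ^ (random.random() < ERROR_RATE) for bit in genome]

def fitness(genome):
    """Toy phenotype: fitness is simply the number of 1-bits expressed."""
    return sum(genome)

# Start from a population of 'unadapted' all-zero genotypes.
population = [[0] * GENOME_BITS for _ in range(POP_SIZE)]
for _ in range(200):
    # Variation arises at the level of the genotype ...
    offspring = [replicate(random.choice(population)) for _ in range(POP_SIZE)]
    # ... while selection acts on phenotypes, under scarcity (fixed population size).
    population = sorted(population + offspring, key=fitness, reverse=True)[:POP_SIZE]

print(max(fitness(g) for g in population))
```

Undirected copying errors plus selection against a scarce "resource" (the fixed population size) are enough to drive the population toward high-fitness phenotypes, with no goal built into the variation step.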
The picture of evolution, through variation and selection leading to survival of the fittest, implies that self-organizing, self-maintaining, reproducing systems already exist. It provides no clue how the first organized structures came into being. The only thing the theory provides is that these are the inevitable result of fluctuations appearing at the moment that the macroscopic branch is no longer stable.
[Diagram elements: Information, Variation, Organization, Resources, Competition, Selection]
Fig. 8.6. Dissipative Structures: Learning Systems.

The arguments developed above are summarized in fig. 8.6. The essential mechanisms are the following. Dissipative structures exist that are able to store and transmit information about their organization. The information is transformed into the actual physical shape of that information,
the organization or organism. In this process, copying of the information takes place, subject to error or variation. For a firm this transformation of information results in, among other things, its physical assets, its products and its services, and its human resources. The nature of the variation merits some additional remarks. It can consist of simple errors in the copying or interpretation of the basic information, or of deliberate experimentation such as the research and development based introduction of new products. However, in any instance reality is too complex to fully understand, and even the best R&D-based new product introduction rests on incomplete information and hence bounded rationality. With its organization the system couples to and creates sources of free value, and it competes with the other actors in the environment. Selection takes place. This impacts again on the information base of the entity and the process starts all over again. In summary, the essence of the dissipative structure is an information set that creates and competes for sources of free value in the environment.

8.7. The starting point of biological evolution: prebiotic evolution.

Some 3-4 billion years ago, under the conditions on the primitive earth, small organic compounds, such as the basic building blocks of living systems (amino acids, nucleic acids and sugars), were synthesized in small but significant amounts. This view is widely accepted (Miller and Orgel (1974)) and it is supported by laboratory experiments showing the formation of these compounds in models of the earth's primitive atmosphere. One of the prerequisites is again that a source of free energy exists to which the uphill processes for the synthesis of these compounds can couple. The abundant source of energy provided by solar radiation directly or indirectly provided such a source of free energy.
The next step is that these compounds get concentrated in small regions of space by adsorption to surfaces or by the evaporation of water from small pools. This process of concentration favored the synthesis of polymeric substances. Some of these molecules exhibited the property that they provided a template for their own synthesis; autocatalysis appeared on the stage set by the early earth. As shown before, autocatalysis provides a situation in which the macroscopic branch is no longer stable, and increasingly complex molecules can develop and complexes of molecules appear. This provided the precursors for the appearance of the first self-replicating units, units of which today's biosphere provides a multitude of examples. In the subsequent development the units grew in complexity in the quest for more effective ways to couple to the sources of free value in the environment and to create new sources of free value.
8.8. Competition and sustained evolution.

This section discusses the stage of evolution beyond the point where the first self-replicating units appeared. We briefly analyze the behavior of competing populations of self-replicating units. We assume that the development outlined in the preceding section has resulted in populations of self-replicating units that compete for sources of free value in the environment and are instrumental in the creation of new sources of free value. In our system new ways have been developed to couple to the sources of free value that exist in the environment or are made available by the very action of those self-replicating units. The systems have learned to capture free value, and a development away from equilibrium to increasingly complex ordered structures and systems of ordered structures starts to take place. The environment and the structures that appear have the following properties:

- There is an element of scarcity in the available and accessible sources of free value.
- The structures can be metabolized and are engaged in metabolism, i.e. sources of free value are degraded in downhill processes.
- Some of the polymers in the structures that appear have the property of self-replication; they provide a mould, or the information, for their own synthesis. In addition, those molecules develop autocatalytic properties, i.e. they enhance their own rate of synthesis.
- Increasingly sophisticated ways of storing and communicating information are developed. Of course, today's DNA provides a sophisticated and prime example of such an information processing system.
- The replication process is not perfect. The properties of the molecules derived from the template differ from the mould. New molecular structures appear that differ from the ones derived from the original information set.
- Competition between the structures results in the selection of structures that more effectively compete for the scarce sources of free value in the environment.
Eigen (1971, 1977, 1978a, 1978b) has shown that if the conditions mentioned above apply, generally only one, or maybe a few, types of molecules or structures that directly compete for a completely identical source of free value can survive. This is completely equivalent to a known feature of biological evolution, where mostly only one organism or a few organisms finally remain for every so-called "niche". A niche can be interpreted as one distinct source of free value, i.e. one need in the market. Here we postulate that this to a certain extent also applies to industry structure; this matter will be discussed more extensively in chapters 9 and 10. Specifically, the molecule, or the structure, with the largest survival value, defined as the largest excess rate of reproduction, survives. If a structure is sufficiently complex, i.e. the number of bits of information underlying the specification of its structure is large enough, evolution may be sustained, i.e. new, more successful, mutant copies continue to appear. This particularly becomes the case if the number of possible structures based on variation of the underlying information is much larger than the number that has been explored over the lifetime of the evolution of the system and its environment. It introduces the concept of sustained evolution and structures of competing entities in a constant process of change. The discussion above makes clear that, given the fact that sources of free value exist or can be made available by self-replicating information based structures in the system, innovations appear that increase the full exploitation of the potential free value in the system. This leads to increasingly complex and increasingly "capable" mutant types of information. This increase in the availability of free value serves to create more opportunity to fuel the appearance of new, generally more complex and more adapted, mutant forms of information and their associated phenotypes.
The rate of evolution thus tends to increase.

[Diagram elements: non-equilibrium; self-reproducing information; selection for increased coupling; innovation through fluctuation; increasing forces]

Fig. 8.7. Sustained evolution, evolutionary feedback.
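The competitive-exclusion result attributed to Eigen above, that a single niche is eventually occupied by the type with the largest excess rate of reproduction, can be illustrated with standard replicator dynamics. The fitness values and step size below are illustrative choices for the sketch.

```python
def replicator_step(shares, fitnesses, dt=0.01):
    """One Euler step of replicator dynamics: each type grows at its excess
    reproduction rate relative to the population average."""
    mean_fitness = sum(f * s for f, s in zip(fitnesses, shares))
    return [s + dt * s * (f - mean_fitness) for f, s in zip(fitnesses, shares)]

shares = [0.5, 0.5]        # two types start with equal shares of one niche
fitnesses = [1.0, 1.2]     # type 2 has a 20% larger reproduction rate

for _ in range(5000):
    shares = replicator_step(shares, fitnesses)

print(shares)   # type 2 approaches a share of 1; type 1 is competitively excluded
```

The total share is conserved by construction (the niche is the scarce resource), so the slightly fitter type does not merely grow faster, it displaces its competitor entirely.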
The system evolves further and further away from the less organized situation characterizing the macroscopic branch. Also the minimum dissipation of statistical entropy, characteristic of the linear, near-equilibrium evolution of the system, no longer applies. The excess rate of growth of the structures takes its place, and this critically depends on the detailed kinetics at the microscopic level. This concept of sustained evolution has been called evolutionary feedback by Nicolis and Prigogine (1977). It is illustrated in fig. 8.7, which is an adapted version of the illustration in the aforementioned work.

8.9. The dynamics of competition.

Consider an environment in which competing organized structures have evolved as a consequence of the appearance of the first self-replicating molecules, as described earlier. The next step is to analyze the competition between various organized structures, such as supramolecular structures like organisms or, for that matter, organizations such as firms. Clearly at least the following features need to be taken into account:

- Reproduction of information.
- Variation in sets of information by imperfect copying fidelity.
- Translation of the information into the structure corresponding to it.
- Competition between the various entities.
- Selection of the "fittest" by competition for scarce resources.

To analyze this in more detail, fig. 8.6 is reproduced here as fig. 8.8 for the reader's convenience. In general a complex dissipative structure, such as a macromolecule, an organism, a human being, a technology or competence, an industrial organization, or an economy, is fully characterized by the information that is needed to reconstruct it.
Fig. 8.8. The dynamics of competition.

In general an observer of the organized structure, even an observer inside the structure, will not avail of all the information needed to fully characterize the detailed microscopic workings of the structure. His "macroscopic" picture is characterized by a statistical entropy that quantifies the lacking information about the structure. In a living organism, an observer without any additional information would need the amount of information needed to select the specific DNA structure
The non-linear free value transducer: Sustained evolution
from all the possible combinations corresponding to the size of the genome.

Note 8.6. The amount of information needed to understand the details underlying the genetic structure of even a relatively small organism like the enteric bacterium E. coli is very large indeed. If we take into account a genome size of 5·10^6 DNA bases, and realize that at each position four nucleic acid "letters" can be used, the amount of information needed would be 10,000,000 bits. In the case of an industrial organization this information is more difficult to grasp. It would include all information that is needed for the operations of the company: the information contained in its products, its captive market knowledge, the blueprints of its tangible assets, the information characterizing its competence and technology base, and the information regarding its strategies and future plans. Some of this information is present in written form or in computer files, some of it resides in the heads of its human resources as tacit knowledge, and some of it represents cultural aspects of the company. To extend the example given in Note 8.3, the human genome can be considered: it represents 6,000,000,000 bits of information. This corresponds to a choice of 1 out of the order of 10^2,000,000,000 DNA-base combinations. Even if a much simpler case is taken, the magnitude of the selection problem becomes apparent. A single hemoglobin molecule, the oxygen carrying protein in blood, consists of four chains of amino acids (polypeptides) twisted together. One of these chains contains 146 amino acids. Each amino acid is one out of the 20 that naturally occur in proteins in organisms. The total number of possible combinations is of the order of 10^190. This can be compared to the 10^100 molecules that could simultaneously exist in the universe if the mass present in the universe would allow filling it completely in a closest packing of molecules. If the estimated mass present in the universe is taken into account, the number of molecules that could simultaneously exist decreases to about 10^23; minute compared to the total number of combinations. If this discussion is extrapolated to an organism or an enterprise, the significance of the limitations on the amount of information that is realistically obtainable becomes apparent (Eigen (1971)).
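The magnitudes quoted in this note are easy to verify. A minimal sketch, using the genome sizes given in the note and treating each DNA base as one of four letters and each amino acid as one of twenty:

```python
import math

def sequence_info(alphabet_size: int, length: int) -> tuple[float, float]:
    """Return (information in bits, log10 of the number of possible sequences)."""
    bits = length * math.log2(alphabet_size)
    log10_sequences = length * math.log10(alphabet_size)
    return bits, log10_sequences

bits, _ = sequence_info(4, 5_000_000)              # E. coli-sized genome
print(f"E. coli genome: {bits:.1e} bits")          # -> 1.0e+07 bits

bits, log10_seq = sequence_info(4, 3_000_000_000)  # human-sized genome
print(f"human genome: {bits:.1e} bits, 10^{log10_seq:.1e} sequences")

_, log10_seq = sequence_info(20, 146)              # one hemoglobin chain
print(f"hemoglobin chain: 10^{log10_seq:.0f} possible sequences")  # -> 10^190
```

The hemoglobin figure, 20^146 ≈ 10^190, is exactly the comparison the note makes against the 10^100 molecules a closest-packed universe could hold.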
Fig. 8.9. Exploiting a source of free value by products and services of increasing sophistication.

As soon as a potential source of free value appears in the system, which can, as an example, be the demand for a product, a development takes place by which entities appear that satisfy the
demand and feed on the free value associated with it in an increasingly sophisticated way. Generally this will also lead to an increasingly potent source of free value that can be harvested by the structures that supply the product or service. Typically a development as depicted in fig. 8.9 takes place. Fig. 8.9 depicts a typical life cycle of the way in which the supply of products develops. A single innovation, after it has emerged, goes through a phase of growth into maturity and a final phase of decay. This is indicated by the trajectory labeled 1 in fig. 8.9. The decay is often caused by a second innovation which, going through the same life cycle, replaces the existing one because it is potentially more sophisticated. This development is labeled 2 in the figure. The process is repeated when an additional innovation emerges. A number of additional complications arise when the theory is applied to more complex systems. The first stems from the observation that, in many advanced systems, the information coding for the dissipative structure and the physical form of the dissipative structure are different entities. The information coding for the dissipative structure acts as a template or blueprint for the actual form of the structure. This leads to questions of cause and effect. What causes the formation of complex structures? Is it the information or is it the functional structure? This question becomes meaningless once the evolution has progressed to a point where the information carrier and the functional structure have divorced. Once the cycle depicted in fig. 8.8 has been closed, the evolution becomes a true cycle and both the information set and the functional form are simultaneously selected. Both the information set and the functional structure are tested by the interaction with other structures and the environment.
Both the environment and the structures become cause and effect. The concepts of chance and necessity also appear in the theory. Evolution of increasingly complex structures is a necessity if the macroscopic branch becomes unstable. If the information coding for the structure becomes large, the exact path the evolution takes is fundamentally unknown because of the vast number of possibilities. The path is unknown to an outside observer that has a reduced information picture; it is also unknown to the actors inside the system, as they may have more information but can never obtain complete information. The divorce of the information and the functional structure is an example of a division of labor type of specialization. If these functions become separate, better suited structures can develop. In fact division of labor is a characteristic of industries. A feature that is also of great interest is the balance between stability and complexity. On the one hand, the more complex the system gets, the larger the number of possible innovations that challenge the existing global structure. On the other hand it can be shown that, certainly if the structure has aged, a large fluctuation in terms of competitive advantage is needed to displace the entrenched structure. The foregoing discussion has highlighted the intimate relation between the structures and the environment. It showed that sustained evolution becomes inevitable in an environment in which sufficiently large sources of free value exist or can be created. This chapter will now be completed by briefly returning to biological evolution and its relation to elements of human society such as industries and economies.

8.10. Biological evolution: Dissipative structures and information processing.
In the preceding sections we have discussed the origin of life and the appearance of molecules that, through some crude memory and replication capability, are able to sustain their nonequilibrium structure and to create and use sources of free value in the environment. Autocatalysis, self replication, information storage and competition were highlighted as key drivers. It has been shown that the organized entities have two very basic functions. Firstly, they
contain the information for the replication of their own structure. Secondly, the structures translate the information into the vehicle allowing interaction and competition with other structures and other aspects of the environment. The first crude replicator molecules combined these functions. Quite early in the evolution of life on earth (bacteria trace back at least 3 billion years) the functional structure and the information carrier became different chemical entities. These molecules successfully discovered the beauty of cooperation in the quest for and development of sources of free value. In biological systems the role of information carrier and processor has been taken up by nucleic acid polymers of the DNA and RNA types. Microorganisms, plants, animals and humans, i.e. the mind-boggling diversity in the biosphere on earth, are all products of the versatility of the genetic code and the translation process. The mechanism that leads to the sustained evolution of the biosphere rests in the creative power of the infidelity of the copying process when combined with competition for scarce resources resulting in selection. The imperfect copying of the code leads to a constant exploration of the vast diversity of structures that can be based on the coding mechanism. In this way new structures constantly appear and challenge the existing structure. The interaction with the environment, both in terms of resources and of other structures that compete with the structure, decides whether a mutant (infidel) copy replicates faster than the mother copy. If it happens to replicate faster, it will gradually but inevitably replace the mother copy and its functional structure. One could say that the codes are engaged in a gaming or experimentation process in which, by learning by doing, more optimal ways develop to take better advantage of the opportunities in the environment.
An important aspect of the process described above is that the coding versatility of even a limited stretch of genetic information allows the creation of far more structures than can be tested in the lifetime of an evolution, even though bacteria appeared on earth some 3 billion years ago. The room for further evolution is therefore endless, and new structures, unexpected to the limited information observer, continue to appear. The reproduction of the code is, although of limited accuracy, still fairly faithful. This means that the mutant species that develop from the mother copy inhabit only a limited part of the space that contains all possible copies. The only chance a copy has of being selected and replacing the mother copy is when it outperforms the mother copy. The likelihood that this will happen is, given the rather high copying fidelity, rather limited over a short time horizon, but it increases if longer times are considered. It is beyond the scope of the present work to present a complete picture of biological evolution; the interested reader is referred to specialized texts on the subject (Gould (1982)). In the way described directly above, many of the species that are observed in the present day biosphere, and that have existed in the past, largely disappear again. In this biological evolution the coding function of DNA was the main means of storing and processing information. The divorce between the molecules active in storing and transmitting information and those involved in the embodiment of the functional structure proved to be by no means the only specialization trick the biosphere had up its sleeve. A new approach results with the invention of the brain that, e.g. in mammals, equips organisms to store, process and transmit information by other means than the genetic information hardware in the nucleic acids. This allows organisms to adapt their function and to learn beyond the limitations of the genetic hardware.
This innovation fully developed with the appearance of the ancestors of humankind. After a while these hominids developed a much larger brain than the species from which they evolved. Some 250-400 thousand years ago Homo sapiens appeared, with a brain size of about 3 times that of earlier ancestors such as Australopithecus afarensis. The human brain has greatly enhanced the possibilities to analyze and understand reality. It made new ways of storing and processing information available with the emergence of spoken and written language and later on
computers. This revolutionized the so-called exogenic evolution of the human species and its society. The brain makes it possible to develop tools and machines and is instrumental in the creation of science and technology; culture, arts and sciences, and firms and economies are also products of this exogenic evolution. This resulted in new functional entities that are no longer part of the human body. Their contribution to the competitiveness of the species, however, is as real as the claws and teeth of the large cats. It is a real part of evolution, just as the evolution embodied in the DNA molecules. The analogy can be extended further. The further evolution of our culture, including markets and firms and science and technology, follows the same general rules as the early stages of biological evolution; in fact these are an instrument of further biological evolution. Human culture is based on a new kind of dissipative structure. In addition to the information stored in our DNA, the information stored in our brains, the information stored in written form, the information transferred by the spoken word and the information stored in computer systems are all part of the new information set on which competitiveness is based. This information forms the basis for the creation of new functional structures that create and exploit sources of free value in the environment. In fact we learn to harvest free value that, in principle, has always been available, but was inaccessible to the more primitive structures of the past. New ways of transmitting information have also developed, in teaching and in scientific publications, to mention prime examples. Also for these more complex structures it is the competitive environment that shapes new, more successful sets of information. Sets of information can now be perfected by learning by doing and by scientific understanding and the resulting R&D activities that have become a hallmark of the academic world and modern industry.
Science based research in industry emerged at the end of the 19th century and has increased in importance ever since. Our information about what it takes to optimally compete in free value space can never be complete, and it is not possible to specify with absolute certainty the information base that is needed to optimally compete. A considerable amount of information is lacking, as it is possible to access only a limited information set. There definitely exists a large uncertainty, i.e. a significant statistical entropy characterizes our knowledge of the relevant reality. This leads to a situation in which taking risks is a necessary element of success: we live in a "no guts, no glory" kind of reality. The important point that has been reached in our discussion in this and earlier sections is that biology as well as human culture consist of dissipative structures whose functionality is based on captive information that allows more or less successful competition for value in the environment, value that can be made available as free value to the more informed actors. In fact this also holds for industry structure. Our industries are dissipative structures that thrive on and develop information to effectively compete. This aspect will be revisited in the last two chapters of this book.

8.11. Conclusion.

In summary, information is the prime resource that allows the creation of free value from sources of value in the environment. It is quantified in terms of the statistical entropy of the picture of reality the various actors have. This leads to a situation in which forces, resulting from asymmetries in information and different perspectives on free value, exist or can be created. These allow transformation and transaction processes to take place that lead to the generation, growth, maintenance and decay of dissipative, information processing structures. Biological evolution has been discussed as a prime example and has been generalized to apply to large parts of human society.
Limited fidelity copying, or, alternatively phrased, experimenting with the primary information code, shapes new dissipative structures that are better equipped to compete. New exogenic ways of storing, processing and transmitting information appeared and became a prime
characteristic of the structures in human culture. Information and its transmission and perfection prove to be the prime competitive tools. The process of evolution is not, or only to a limited extent, goal oriented, both at the level of the global environment and at the level of the individual actors. This certainly holds for pre-human biological evolution, but it also applies to entities like firms. Firms can have, and generally have, explicit goals, but their fundamentally limited information leads to elements of uncertainty and risk that preclude strict goal oriented development. One can "roadmap" a strategy, but it is well known that a roadmap is of limited use whilst walking in a swamp. However, evolution will proceed in the direction of extracting more of the globally available value in terms of useful free value. It has to be realized, however, that bounded rationality and the intrinsic characteristics of the dynamics of complex systems may lead to loss of stability; ups and downs in the free value extracting process are characteristic of a system in which interrelated dissipative structures operate in a competitive way in an environment that also has its own dynamics. An element of crisis and decay is as certain an aspect of sustained evolution as periods of sustained growth at the global level. We revisit these matters in the next chapter, where the theory of competition and selection is developed further.
8.12. Literature cited.

Alchian, A.A. (1950), Uncertainty, Evolution, and Economic Theory, J. Political Economy, 58, 211-221
Alchian, A.A., H. Demsetz (1972), Production, Information Costs, and Economic Organizations, American Economic Review, 62, 777-795
Coase, R.H. (1937), The Nature of the Firm, Economica New Series, 4 (16), 386-405
Darwin, C. (1872), The Origin of Species, Chapter 4, London, Reprinted by Collier Macmillan, London (1959)
Dawkins, R. (1976), The Selfish Gene, Oxford University Press, Oxford
Dawkins, R. (1987), The Blind Watchmaker, Norton Co., New York
Dopfer, K. (Ed.) (2005), The Evolutionary Foundations of Economics, Cambridge University Press, Cambridge (UK)
Eigen, M. (1971), Self Organization of Matter and the Development of Biological Macromolecules, Die Naturwissenschaften, 58 (10), 465-523
Eigen, M. (1977), The Hypercycle, A Principle of Natural Self-Organization, Part A: Emergence of the Hypercycle, Die Naturwissenschaften, 64 (11), 541-565
Eigen, M. (1978a), The Hypercycle, A Principle of Natural Self-Organization, Part B: The Abstract Hypercycle, Die Naturwissenschaften, 65, 7-41
Eigen, M. (1978b), The Hypercycle, A Principle of Molecular Self-Organization, Part C: The Realistic Hypercycle, Die Naturwissenschaften, 65 (7), 341-369
Glansdorff, P., I. Prigogine (1971), Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley Interscience, New York
Gould, S.J. (1982), The Meaning of Punctuated Equilibrium, and its Role in Validating a Hierarchical Approach to Macroevolution, In: R. Milkman (Ed.), Perspectives on Evolution, Sinauer, Sunderland (MA), 83-104
Hirshleifer, J. (1976), Price Theory and Applications, Prentice-Hall, Englewood Cliffs, N.J.
Hirshleifer, J. (1977), Economics from a Biological Viewpoint, J. of Law and Economics, 20, 1-52
Miller, S.L., L.E. Orgel (1974), The Origin of Life on Earth, Prentice Hall, Englewood Cliffs, N.J.
Monod, J. (1971), Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology, Alfred A. Knopf, New York
Nelson, R.R., S.G. Winter (1982), An Evolutionary Theory of Economic Change, Belknap Press of Harvard University Press, London
Nicolis, G., I. Prigogine (1977), Self-Organization in Non-Equilibrium Systems, Wiley & Sons, New York
Prigogine, I. (1980), From Being to Becoming, Freeman, San Francisco
Prigogine, I., I. Stengers (1984), Order out of Chaos, Bantam Books, New York
Roels, J.A. (1983), Energetics and Kinetics in Biotechnology, Chapter 6, Elsevier Biomedical, Amsterdam
Schrödinger, E. (1945), What is Life?, Cambridge University Press, Cambridge
Williamson, O. (1975), Markets and Hierarchies: Analysis and Antitrust Implications, Free Press, New York
CHAPTER 9. COMPETITION AND SELECTION: THE FIRM AND INFORMATION PROCESSING.

9.1. Introduction.

The preceding chapters identified the forces driving growth and decay of dissipative structures. Firms and other economic institutions are examples of such structures. These structures take advantage of and create sources of free value in the environment and couple to those forces in an increasingly effective way. This chapter develops a quantitative formalism for competition and selection. This allows us to close the chapter with a discussion of the nature of the firm and the correspondences and differences between biological and economic evolution. First we introduce an approach largely borrowed from developments in physics of some 30 to 40 years ago, deriving from the work of Eigen (1971, 1977, 1978a, 1978b). The approach of Eigen cannot be directly applied to firms and markets; we mainly introduce it to show some interesting features of evolution under competitive pressure. These general features lead to observations that also apply to economic systems. Then we introduce an approach based on VTT using the concept of the linear free value transducer as developed in chapter 6. The treatment in the first four sections closely follows the work of Eigen.

9.2. The dynamics of competition and selection.

Eigen introduces the notions of metabolism, reproduction and mutability. These features lead to the development of dissipative structures and support their sustained evolution. The work of Eigen refers to the evolution of biological macromolecules and their corresponding biological structures. This does not directly apply to socioeconomic systems: the information storage and processing mechanisms are completely different, as is the way this information is translated into structures that compete in the environment. Metabolism expresses the need for a source of free energy.
In our terminology this expresses the need for a source of free value. In Eigen's work it is a source of the monomers making up DNA or RNA. Eigen describes the competition of the resulting macromolecular species for those monomers, which are supplied at a limiting rate. This scarcity of resources is a necessary feature; evolution only takes place if scarcity of resources exists. Reproduction reflects the fact that genomic macromolecules reproduce themselves into new copies that build new organisms. As discussed in chapter 8, in biology there is a clear separation between the information processing function of DNA and RNA and the biological structures, which are largely based on proteins. Mutability refers to the copying fidelity of information sets. Copying is not perfect and it changes the information sets in a random way. These limitations to the copying fidelity are also a prerequisite for evolution; if copying is perfect, evolution comes to a halt. Darwin (1872) is credited with the development of a model explaining biological evolution on earth. His model introduces natural selection and survival of the fittest. These mechanisms are basic to the definition of Darwinian evolution. This section discusses the prerequisites for such a development and its consequences. The terminology of Eigen introduces, as said, the notions of metabolism, self reproduction and mutability. In discussing Eigen's approach we will keep the mathematics to the minimum required for the purpose of this book; the reader is referred to the original literature for additional detail. Metabolism expresses the need for a source of free energy, or in our broader interpretation free value, that can be exploited by the structures. In some respects Darwinian systems are intermediate between value at a low level of statistical entropy and value at a high level of
Competition and Selection: The firm and information processing
statistical entropy. The resulting force, due to the gradient in free value, allows the dissipative structures to couple to this opportunity and to maintain and grow their structure against the forces of the second law. As we stated, it was Schrödinger (1945) who highlighted this as a characteristic of living systems. Self reproduction reflects the fact that dissipative structures in the non-linear range are frequently based on storing and transmitting the information that underlies their structure and their ability to compete for sources of free value. As the structures are intermediate between a source of high free value and a sink of low free value, the structure specific information cannot be perfectly stable; it is degraded by the very forces that allow its creation and further development. Some form of autocatalysis is needed to fight these degrading forces. Mutability refers to the fact that perfectly faithful copying of information does not occur. These reproduction errors are a prerequisite for sustained evolution. Errors that provide new information sets are the mechanism that tests information sets against the forces of competition. Finally, scarcity of resources is necessary for competition to drive evolution. The conditions mentioned above are not only necessary for evolution, but also sufficient for self-organization and further evolution. Evolution becomes inevitable.

Note 9.1. A first feeling for the dynamics of this type of evolution results if we analyze a very elementary example of a system. The system's information set consists of six digits that can each have a value between zero and six. Initially the six numbers are set equal to zero. The survival value of the system, i.e. the value that is the object of the selection, is the sum of the six digits. It can vary between 0 and 36. After each basic time interval the information set of the system is reproduced. A fraction q of the system's symbols reproduces exactly.
In a fraction 1 − q the digit is not reproduced correctly and a random number between 1 and 6 is inserted.
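Together with the selection rule described below the figure (the derived copy is kept only when its survival value exceeds that of the mother copy), this toy model can be simulated in a few lines. A minimal sketch; the generation count and the values of q are illustrative choices:

```python
import random

def simulate(q: float, generations: int = 500, seed: int = 0) -> list[int]:
    """Toy model of Note 9.1: six digits, survival value = their sum (0..36)."""
    rng = random.Random(seed)
    mother = [0] * 6
    history = []
    for _ in range(generations):
        # each symbol copies exactly with probability q, otherwise a die throw
        copy = [d if rng.random() < q else rng.randint(1, 6) for d in mother]
        if sum(copy) > sum(mother):  # sharp selection for improved copies
            mother = copy
        history.append(sum(mother))
    return history

for q in (0.95, 0.80):
    print(f"q = {q}: survival value reached = {simulate(q)[-1]} / 36")
```

Running this reproduces the qualitative behavior discussed below: the survival value rises monotonically, the rise is faster for the higher error rate, and with perfect copying (q = 1) it never leaves zero, i.e. evolution comes to a halt.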
Fig. 9.1. Results of simulation of "Darwinian selection" using an elementary model.

This process can be mimicked by, for example, the throwing of a die. After each reproduction cycle the copy's survival value is compared to that of the mother copy. If the performance of the mother copy equals or exceeds that of the derived copy, the mother copy is retained. If the derived copy is superior, the derived copy is retained. This represents a very sharp selection for
improved copies. Fig. 9.1 shows the average of ten experiments for two values of q. Average performance is shown as a function of time. This admittedly very simple model has interesting features. Firstly, the curve shows some resemblance to the life cycle curve that was discussed in e.g. chapter 1. Particularly the growth and mature phases seem to be well mimicked by our model. Also the apparent goal orientation of the combination of random mutation and sharp selection is clear. This was discussed in section 8.6 as a feature of evolution of complex information based systems. The performance steadily increases. This is very different from the case in which no selection takes place; in that case the performance fluctuates randomly around a value of about 0.6. Finally, the model shows that evolution takes place more quickly if the error rate is higher. This obviously makes sense; more different copies are tried per unit of time if the error rate increases. However, if the error rate increases further, a limit is reached beyond which different features emerge. This is the phenomenon of the error threshold. We discuss this threshold in section 9.4.

9.3. The dynamics of selection in Darwinian systems.

As Eigen shows, the following differential equation represents a simple system having all the requirements for Darwinian selection:

dN_i/dt = A_i Q_i N_i − m_i N_i + Σ_(l≠i) M_il N_l − Φ_0 N_i    (9.1)
In eqn. 9.1, N_i is the number of copies of information set i, i.e. the number of molecules having one defined DNA code. The first term at the right hand side of eqn. 9.1 represents the rate of self instructed formation of the information set i. The process is first order in the number of copies of the information set; this reflects the autocatalytic character of the process of reproduction. A_i is a rate constant. Q_i represents the quality of the copying process; it is the fraction of the copies exactly matching the mother copy and has a value between zero and one. It reflects the mutability; the set is not copied perfectly. The second term at the right hand side reflects loss of copies of set i by degradation processes. This process results in waste that is not useful. The rate of degradation is proportional to N_i, the "maintenance" coefficient being m_i. The summation term at the right hand side reflects the production of set i by infidel copying of other related sets of information. Finally, the last term at the right hand side accounts for the removal of copies of set i by processes other than degradation, e.g. by transport from the system to the environment. This term depends on the way the system is interacting with its environment. Here, following Eigen, we assume that the total amount of information is constant, independent of the specific content of the sets. This leads to:

Φ_0 = Σ_k (A_k − m_k) N_k / Σ_k N_k    (9.2)
Note 9.2. The following reasoning directly leads to the derivation of condition 9.2. The production of copies of information set i through inaccurate copying of related information sets, i.e. the third term at the right hand side of eqn. 9.1, results as:

Σ_(l≠i) M_il N_l = A_i (1 − Q_i) N_i

On substitution of this result in eqn. 9.1 we get:

dN_i/dt = A_i N_i − m_i N_i − Φ_0 N_i

If we sum all terms at the left hand side of the above equation, that is, if we count up all changes of the numbers of information sets, we must get zero because we assumed constancy of the total amount of information. This can only apply if we have:

Φ_0 = Σ_i (A_i − m_i) N_i / Σ_i N_i
Changing the summation index from i to k, to which the sum is of course invariant, results in eqn. 9.2.

Eqn. 9.1 can also be written as:

dN_i/dt = (S_i − Ē(t)) N_i + Σ_(l≠i) M_il N_l    (9.3)
In eqn. 9.3, S_i is the intrinsic selection value of information set i, given by:

S_i = A_i Q_i − m_i    (9.4)
Ē(t) is the time dependent average excess production of the collective information sets; it is defined as:

Ē(t) = Σ_k E_k N_k / Σ_k N_k    (9.5)
With E_k the rate of overproduction of species k, defined as:

E_k = A_k − m_k    (9.6)
The system of eqns. 9.3 can be solved in closed form if A_i, Q_i and m_i are constants. The solution of the system of equations is facilitated by the introduction of the concept of the quasi-species. The amount Y_k of the quasi-species k is a convenient linear combination of the amounts of distinct but closely related sets of information:

Y_k = Σ_i λ_ik N_i    (9.7)
The quasi-species is a special linear combination of the amounts of individual sets of information i; it multiplies at a time independent specific rate μ_i. The dynamics of the development of the amount of the quasi-species is given by:

dY_i/dt = (μ_i − Ē(t)) Y_i    (9.8)
The solution of the system of eqns. 9.8 leads to a straightforward result. Any quasi-species with a specific rate of multiplication μ_i below the threshold Ē(t) will decay and gradually disappear; only species with a rate of multiplication higher than the threshold grow in importance. This results in an increase of the threshold Ē(t) until it reaches the maximum E_max. The quasi-species that finally remains is a cloud of closely related species which differ marginally from the species with the maximum intrinsic selection value, S_i,max. Eigen shows that the following approximation holds for large information sets:

E_max ≈ S_i,max    (9.9)

The following constraint applies:

S_i,max > Ē_av,k≠i,max    (9.10)
In which the right hand side of the inequality is given by eqn. 9.5 with i,max excluded from both summations at the right hand side. The constraint 9.10 can also be written as:

Q_i,max > 1 / σ_i,max    (9.11)

where

σ_i,max = A_i,max / (m_i,max + Ē_av,k≠i,max)    (9.12)

is the superiority function of the dominant species.
Note 9.3. The inequality 9.11 derives as follows. First we write eqn. 9.4 for index $i,max.$:

$$S_{i,max.} = A_{i,max.} Q_{i,max.} - m_{i,max.}$$

Combining this with 9.10 results in:

$$A_{i,max.} Q_{i,max.} - m_{i,max.} > \bar{E}_{av.,\,k \neq i,max.}$$
Elementary rearrangement leads to:

$$Q_{i,max.} > \frac{\bar{E}_{av.,\,k \neq i,max.} + m_{i,max.}}{A_{i,max.}}$$

By inspection this proves equivalent to the combination of eqns. 9.11 and 9.12.
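The selection dynamics of eqns. 9.3 through 9.9 can be illustrated with a small numerical sketch. All parameter values below are hypothetical and chosen only for illustration, and the way the error copies are distributed over the off-diagonal entries is a simplifying assumption of this sketch, not a construction taken from Eigen's work. The diagonal of the rate matrix holds the intrinsic selection values $S_i = A_i Q_i - m_i$ of eqn. 9.4; the largest eigenvalue is the growth rate of the dominant quasi-species, and the average excess production $\bar{E}(t)$ of eqn. 9.5 converges to it:

```python
import numpy as np

# Hypothetical parameters for three competing information sets.
A = np.array([10.0, 8.0, 6.0])    # reproduction rates A_i
Q = np.array([0.80, 0.90, 0.95])  # copying fidelities Q_i
m = np.array([1.0, 1.0, 1.0])     # decay rates m_i

# Rate matrix W: the diagonal holds the intrinsic selection values
# S_i = A_i Q_i - m_i (eqn 9.4); the off-diagonal entries share the error
# copies A_j (1 - Q_j) of set j equally over the other sets (an assumption).
n = len(A)
W = np.diag(A * Q - m)
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = A[j] * (1 - Q[j]) / (n - 1)

mu_max = np.linalg.eigvals(W).real.max()  # growth rate of the dominant quasi-species

# Integrate dN_i/dt = sum_j W_ij N_j - E(t) N_i, with E(t) the average excess
# production of eqn 9.5; the -E(t) N_i term keeps the total amount constant.
N, dt = np.array([0.01, 0.01, 0.98]), 1e-3
for _ in range(20000):
    E = (A - m) @ N / N.sum()
    N = N + dt * (W @ N - E * N)

E_final = (A - m) @ N / N.sum()
# E_final approaches mu_max, and the surviving population is a "cloud":
# the error copies keep all sets present at nonzero levels.
```

The simulation reproduces the qualitative picture of the text: the threshold rises until it reaches the growth rate of the best quasi-species, which slightly exceeds the largest intrinsic selection value because the cloud of error copies contributes to it.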
In summary, the analysis of Eigen leads to the following results. Darwinian selection leads to a final situation in which the reproduction rate of the most competitive copy of information largely defines the reproduction rate of the population that is present. This reproduction rate is defined at the level of the phenotype: in biology it is not the information set that directly competes but rather the phenotypic translation of that information set. What is finally obtained is not the most competitive information set alone. In fact the total fraction of the most competitive copy in the population that is finally selected is relatively small if the information set is large. In most cases the population which results consists of a "cloud" of closely related information sets. There may, however, also result situations in which largely different information sets with comparable properties at the level of the phenotype, i.e. information sets with the same selection value, coexist in the final situation. This leads to a situation where the population can flexibly respond to changes in the environment, because information sets that are better adapted to the new conditions may already exist in the "cloud", the quasi-species, and quickly become more dominant if the environment so requires.

If, as an example, the information set contains 4500 symbols and the copying fidelity is .9995, whilst the superiority function of the most successful copy is 200, the relative abundance of the most competitive copy is only 10%. Thus 90% of the population is made up of copies that contain at least one symbol different from the ideal copy.

A few additional remarks will be made regarding the dynamics of Darwinian evolution. Given the right conditions, i.e. if the prerequisites for evolution as discussed in this chapter are fulfilled, evolution towards a maximum reproduction rate is inevitable; it is a necessity.
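The quoted numbers can be checked directly. The stationary relative abundance formula $(\sigma Q - 1)/(\sigma - 1)$ used below is taken from the quasi-species literature rather than derived in this chapter; with the values from the example it indeed comes out near 10%:

```python
import math

# Example values: 4500 symbols, per-symbol fidelity 0.9995, superiority 200.
nu, q_r, sigma = 4500, 0.9995, 200

Q = q_r ** nu  # probability the whole set is copied without error (cf. eqn 9.14)
# Stationary relative abundance of the master copy (quasi-species literature).
x_master = (sigma * Q - 1) / (sigma - 1)
```

Both $Q$ and the master fraction evaluate to roughly 0.10, in agreement with the 10% quoted in the text.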
The actual path the evolution takes is, especially in the case of large information sets, subject to chance and has an important historical dimension. Situations may develop in which the evolution apparently comes to a halt at a level of the reproduction rate that does not correspond to the optimal situation; the information set may become stuck in a local optimum and only relatively large fluctuations are able to drive it away from that local optimum to a more remote, higher reproduction rate optimum. In this way the concept of sustained evolution is regained. Also, it has to be taken into account that, again for large information sets, it may take a long time before the final situation is reached, and in that period the environmental conditions that influence the selection value of an information set may have changed to favor another information set. It is also true that once evolution has resulted in a sufficiently sturdy local optimum, it is difficult for innovative information sets to displace the quasi-species characteristic of the local optimum. There is a definite first mover advantage. We have kept the mathematical treatment of the Eigen model to the bare minimum required; the reader is referred to the original literature for more detail.

9.4. Information content and the error threshold in evolution.

In the last section we discussed the dynamics of evolution of information sets and related phenotypes using the approach of Eigen. The analysis can also be used to derive an important constraint on the copying fidelity of a set of information in relation to the amount of information that can be maintained and transmitted by that set. Assume that the information set consists of $Z_k$
symbols, e.g. DNA bases. The average information content of a symbol is assumed to be $b$. In the example of DNA, where 4 bases exist, the information content per base would be 2 if measured in bits. The amount of information of the information set, $I_k$, follows as:

$$I_k = Z_k\, b \qquad (9.13)$$
To study the process of reproduction we assume that the probability of one symbol being correctly reproduced, i.e. its reproduction fidelity, is $q_r$. In that case the probability $Q_k$ that the entire message is reproduced correctly is given by:

$$Q_k = q_r^{Z_k} \qquad (9.14)$$
In biology the maintenance and growth of a dissipative structure requires continuous copying of the information set on which the competitive value of the structure depends. This requires the original message to be stable in competition with the other sets $i$ in the environment. This brings us back to the requirement for selection of the message, derived in section 9.3 as eqn. 9.11:

$$Q_{i,max.} > \frac{1}{\sigma_{i,max.}} \qquad (9.11)$$
If eqns. 9.11 and 9.14 are combined it follows:

$$q_r^{Z_k} > \frac{1}{\sigma_{i,max.}} \qquad (9.15)$$
This inequality is approximated by:

$$Z_k < \frac{\ln \sigma_{i,max.}}{1 - q_r} \qquad (9.16)$$
Note 9.4. Inequality 9.16 is derived as follows. Taking the natural logarithm of both sides of eqn. 9.15 results in:

$$Z_k \ln q_r > -\ln \sigma_{i,max.}$$

This can be written as:

$$Z_k \ln\left(1 - (1 - q_r)\right) > -\ln \sigma_{i,max.}$$

In practical cases $q_r$ is close to unity and hence $1 - q_r$ is small. In that case we can take advantage of the following approximation: $\ln(1 - x) \approx -x$
Application of this approximation to the first term of the inequality above leads to:

$$Z_k \ln\left(1 - (1 - q_r)\right) \approx -Z_k (1 - q_r)$$

If this is substituted in the last inequality above it follows:

$$-Z_k (1 - q_r) > -\ln \sigma_{i,max.}$$

Or:

$$Z_k < \frac{\ln \sigma_{i,max.}}{1 - q_r}$$
The inequality in 9.16 provides an intuitively logical restriction on the error rate that can be tolerated if the objective is to maintain and progress a stretch of information of a given size. Making too many errors causes the message to lose its coherence. The limit is seen to depend rather weakly on the superiority function $\sigma_{i,max.}$; it is present as a logarithm. However, the dependence on the copying fidelity $q_r$ is very stringent.

For ease of discussion we assume that the logarithm appearing in 9.16 is one, i.e. the superiority function of the "best" information set equals $e$. In that case a copying fidelity of .99 will allow maintaining a message of at most 100 symbols. If we add two nines to the copying fidelity, i.e. assume it to equal .9999, a message of 10,000 symbols is just maintained. Thus maintaining the genetic information of E. coli, with a genome of 4·10^6 symbols, requires an error rate below 2.5·10^-7. If we consider the human genome of 3·10^9 DNA bases, an extremely high copying fidelity is required.

This observation results in an important feature of the development of dissipative structures that are based on a large set of information. Computer experiments on maintaining a given copy of information under a selective pressure result in the following picture. If the length of a message and its copying fidelity result in a size beyond the limit given in 9.16, the message will quickly disintegrate; the information "melts" away. This is also the case if the target message is initially the only one present. If the copying fidelity exceeds the limit a totally different picture emerges: the mixture starts to evolve quite quickly to the sequence with the highest superiority function and its closely related companions in the quasi-species. The optimal quasi-species emerges as dominant even if initially not present at all. The speed of evolution increases with decreasing value of the copying fidelity, down to the limit prescribed by inequality 9.16.
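The threshold sizes quoted above follow directly from inequality 9.16. A short sketch, using the text's assumption $\ln \sigma = 1$ (i.e. $\sigma = e$):

```python
import math

def max_symbols(q_r, sigma):
    """Largest message length Z_k consistent with selection (inequality 9.16)."""
    return math.log(sigma) / (1.0 - q_r)

sigma_e = math.e                         # ln(sigma) = 1, as assumed in the text
n_99 = max_symbols(0.99, sigma_e)        # a fidelity of .99 supports ~100 symbols
n_9999 = max_symbols(0.9999, sigma_e)    # a fidelity of .9999 supports ~10,000

# Conversely, a genome of Z_k symbols needs an error rate 1 - q_r below
# ln(sigma)/Z_k; for the E. coli genome of 4e6 symbols this gives 2.5e-7.
max_error_ecoli = math.log(sigma_e) / 4e6
```

The logarithmic dependence on the superiority function is visible here: even a hundredfold larger $\sigma$ only multiplies the permissible message length by $\ln(100\,e) \approx 5.6$, whereas each extra nine in $q_r$ multiplies it by ten.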
If the copying fidelity gets lower than the limit the optimal quasi-species does not emerge or, as said, even melts away if it was initially dominantly present. The process of evolution under a selective pressure proves to be very effective. If we return to the example of a six digit code and a copying fidelity of 5/6, a stringent selective pressure will result in the optimal code of six sixes after 15-20 attempts on average. If no selective pressure exists it takes on average more than 45,000 attempts before the correct sequence is obtained. The "learning by doing" behavior is thus seen to pay off. Analyzing the strategies that apparently evolved in biology leads to the following picture. In nature the copying fidelity seems to be minimized to very close to the limit required to maintain the genome given its size. Thus RNA-based viruses (phages) with a genome size of 1,000-10,000 bases allow an error rate of close to .001 to .0001; the prokaryotic bacteria, with a typical genome size of 5·10^6, allow an error rate of the order of 10^-7. This observation can also be rephrased
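The six-dice figures can be checked by simulation. The protocol assumed below — correct symbols, once obtained, are retained and only the remaining ones are re-rolled — is one plausible reading of the example; the book's exact protocol may differ in detail:

```python
import random

def attempts_with_selection(rng, n_dice=6):
    """Rolls until all dice show six, keeping sixes between rolls (assumed protocol)."""
    remaining, tries = n_dice, 0
    while remaining > 0:
        tries += 1
        # Re-roll only the dice that are not yet correct; count fresh sixes.
        remaining -= sum(1 for _ in range(remaining) if rng.randint(1, 6) == 6)
    return tries

rng = random.Random(42)
trials = 20000
avg = sum(attempts_with_selection(rng) for _ in range(trials)) / trials
# avg lands near 15 attempts, consistent with the 15-20 quoted in the text;
# a blind search over all 6**6 = 46656 codes needs ~46656 attempts on average.
```

The roughly three orders of magnitude between the two strategies is the quantitative content of the "learning by doing" remark.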
in a different way. The copying fidelity of the information set of an organism determines the genome size that can be maintained. The trend in evolution towards larger genome sizes required the invention of increasingly sophisticated copying mechanisms. One can speculate that, when the genome size of the higher animals and the hominids was reached, the further expansion of the information set on which competitiveness was based involved another strategy. The function of DNA-based information storage and transmission was supplemented by the capabilities of the brain. Part of the information storage and transmission divorced from the DNA molecules and exogenous information appeared on the evolutionary stage. Mechanisms such as language, writing, science and education, and academic and industrial research took their role as information storage and processing mechanisms that complement the role of DNA. Firms and markets are examples of information processing organizations, and some of the rules on transmission of information subject to error (or deliberate experimentation) that appeared in this section apply to these institutions.

9.5. Models of Darwinian systems: The Hypercycle.

This section analyzes more complex systems engaged in replication and subject to selection through competition. These more complex systems exhibit features that appear in structures such as organisms, markets and industries. Evolution on earth resulted in complex chemical machineries in which a separation has been realized between information and function. Complex information sets, RNA or DNA molecules, are responsible for storing and progressing information. Protein structures result from the translation of this information into the complex biological machinery of the organism. At the level of these functional structures competition exists and its results feed back into the information carriers. This leads, through a process of mutation and selection, to increasingly competitive structures.
As this discussion reveals, the information and its functional structure interact to form a closed cycle. We want to reemphasize why such a separation of information and function took place. The reasons are manifold, but one of the most basic is that the coding and reproduction functions at the genome level are very different from the competition function of the functional structures. It is unlikely that one and the same molecular species optimally combines these functionalities. Alternatively phrased: separation of these functionalities leads to new sources of competitive advantage, and it therefore inevitably appears in the competition for scarce resources. A detailed discussion of this matter is outside the scope of this book; the reader is referred to the pertinent literature for a more complete account. Of course, separating the functions in different entities leads to coordination problems, e.g. the various entities need to be produced and to interact in a concerted fashion. In biology the separation of functions is observed to be necessary when the structures become very complex and large chunks of information are needed for their instructed creation. In functional biological systems this has led to types of organization that can be modeled by "Hypercycles". These functional entities have many interesting characteristics that will be briefly discussed below. The work of Eigen presents a detailed discussion.
Fig. 9.2. A schematic representation of a Hypercycle.

Fig. 9.2 provides a schematic representation of a simple Hypercycle; it is reproduced from the work of Eigen. The Hypercycle consists of a number of information sets $I_i$. These entities carry only a part of the total information set of the system. The information content of each of these information sets is below the error threshold that results from the fidelity of the copying mechanism used by the entity. Hence its conservation against error copies is guaranteed, with the exception, of course, of copies that increase the competitiveness of the structure. The information sets have a self-reproduction capability, indicated by the open circles in fig. 9.2. The functional structure encoded by the information carrier directly preceding it catalyzes this reproduction process. An important feature of the Hypercycle is that it is closed, i.e. the product of the last information set in the system catalyzes the synthesis of the first one. If the cycle is not closed the structures in the Hypercycle do not cooperate; rather they compete and are not engaged in concerted action. Admittedly, fig. 9.2 represents a very straightforward example of a Hypercycle. Much more complex varieties, showing branches and other complexity enhancing features, are possible. Hypercycles have at least the following properties:

1. The overall Hypercycle exhibits autocatalytic growth. The elements of the Hypercycle grow in a concerted fashion if sources of free energy, or free value, in the environment allow this. This is one of the requirements for sustained evolution. Different Hypercycles engage in competition for scarce sources of free value; they exhibit Darwinian competition and survival of the fittest.

2. Hypercycles show highly non-linear kinetics leading to strong selection behavior. A Hypercycle is in addition, once it has established itself, resistant to substitution by other emerging cycles.

3. Its strong selection behavior allows it to evolve very quickly and to establish and exploit small differences in selective advantage. It is very effective in improvement through learning by doing once it establishes itself as a closed loop.

4. The cyclic arrangement allows the system to use vastly more information than would be consistent with stability in the light of the fidelity of the copying mechanism it is using (see section 9.4). The hypercyclic cooperation allows escaping the fidelity limit.

5. The system selects against so-called parasitic branches, i.e. branches that have become attached to the Hypercycle and replicate with it, but do not contribute to its competitiveness. Also, parts of the cycle that cease to be functional and no longer
contribute to the competitiveness of the overall cycle are automatically removed if this results in increased competitiveness.

6. There is an advantage for the system to escape into a closed compartment. In this way it can evolve and use pieces of information to which its competitors do not have access. This also protects it against pieces of information that have evolved elsewhere and pollute the cycle. We recognize a known feature of organisms, which have developed cell walls and membranes. It is also a feature of human organizations, such as firms, that generally have defined and well policed interfaces with the environment. In such organizations restrictions exist on the exchange of materials, resources and information with the environment.

7. The system inside the compartment may "individualize" by linking the coding information in one chain, just as in organisms. This of course takes away part of the copying fidelity related potential advantages.

8. Individual Hypercycles do not cooperate; rather they compete. They may, however, be linked together by coupling, resulting in larger functional entities in which two or several Hypercycles cooperate. Their cooperative rather than competitive behavior critically depends upon the strength of coupling between the Hypercycles. This mimics processes of fusion and alliance in industries.

The author is well aware of the fact that Hypercycles, although rather complex from the mathematical point of view, present a highly schematized and simplified picture of the reality of markets in which industries compete. He hopes, however, to have illustrated the richness of behavior of these mathematical abstractions. This will be one of the subjects of the discussion in the last chapter of this book.

9.6. Competition and selection: An approach based on VTT.

For simplicity's sake we discuss competition using the linear rather than the non-linear free value transducer.
We discussed the theory of the linear free value transducer in chapter 6. Staying with the linear transducer makes the mathematics easier and it will still lead to relevant conclusions. For the reader's convenience the next note again summarizes the main features of the theory relevant to this section. Note 9.5.
Fig. 9.3. A free value transducer. (The input process has force $X_1$ and flow $J_1$; the output process has force $X_2$ and flow $J_2$.)

Consider the system in fig. 9.3. A system is shown that is in a steady state and acts as a free value transducer (Chapter 6). Input in the transducer is a source of free value which is transformed in a transaction
process characterized by a decrease of free value, creating a force $X_1$ (Chapters 5 and 6 introduced the notion of forces based on VTT). The rate of flow is assumed to be $J_1$. The output of the free value transducer is an uphill process characterized by a force $X_2$ and a rate $J_2$. The input force is, as an example, derived from the need for a product in the market. The free value transducer couples to that force to drive an uphill process in which free value is generated. The free value transducer is the firm that produces the product that satisfies the market need. The following phenomenological equations, which provide the relation between flows and forces, describe the transducer:

$$J_1 = L_{11} X_1 + L_{12} X_2$$
$$J_2 = L_{12} X_1 + L_{22} X_2$$
For convenience we introduce a number of concepts (Chapter 6):

$$Z = \sqrt{\frac{L_{22}}{L_{11}}}, \qquad q = \frac{L_{12}}{\sqrt{L_{11} L_{22}}}$$
In which $Z$ is the phenomenological stoichiometry and $q$ is the degree of coupling. As said, the force $X_2$ is negative as the output of the transducer is an uphill process. An interesting quantity is the power output $P$; it represents the value production per unit time at the output side of the transducer. This translates into a measure for the profit the firm derives from coupling to the force $X_1$, i.e. from supplying products to serve the market need. The power output follows as:

$$P = -J_2 X_2 = -L_{12} X_1 X_2 - L_{22} X_2^2$$
Introduction of the definitions of $q$ and $Z$ in the last equation results in:

$$P = -L_{11} X_1^2\, x\,(q + x)$$
In this equation we also introduced the force ratio x , it is a quantity constrained between 0 and -1, given by:
x
Z
X2 X1
We revisit the degree of coupling $q$; it represents the effectiveness of the firm in coupling to the force created by the need in the market, i.e. the quality and effectiveness of its product. The degree of coupling is constrained between zero and one by virtue of the second law of VTT. We now optimize the force ratio of the free value transducer to achieve maximum power
output given the degree of coupling of the free value transducer. Differentiating the expression for power output with respect to $x$ and setting the derivative equal to zero, the optimum value of the force ratio at which power output is maximized, $x_{opt.}$, calculates as $-q/2$. The optimum power output $P_{opt.}$ results as:

$$P_{opt.} = \frac{1}{4} L_{11} X_1^2 q^2$$

In chapter 6 and Note 9.5 above we derived the expression for the power output of an optimized linear free value transducer:

$$P_{opt.} = \frac{1}{4} L_{11} X_1^2 q^2 \qquad (9.17)$$
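The optimization in the note can be checked numerically; the parameter values below are hypothetical, chosen only to exercise the formula:

```python
import numpy as np

# Hypothetical parameters for a linear free value transducer.
L11, X1, q = 1.0, 1.0, 0.8

x = np.linspace(-0.999, 0.0, 100001)  # force ratio, constrained between -1 and 0
P = -L11 * X1**2 * x * (q + x)        # power output as a function of x

x_opt = x[np.argmax(P)]               # numerical optimum, should sit near -q/2
P_opt = P.max()                       # should equal (1/4) L11 X1^2 q^2
```

The grid search recovers $x_{opt.} \approx -0.4$ and $P_{opt.} \approx 0.16$, in agreement with eqn. 9.17 for $q = 0.8$.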
A company can derive a higher profit from a need in the market in two distinct ways. The first one is straightforward: it develops an information set that allows it to couple more effectively to the source of free value, i.e. it realizes a higher degree of coupling. The other situation is slightly less obvious; the competing companies may differ in the statistical entropy and/or the costs of information and hence create a different value of the force $X_1$. These latter differences also lead to increased competitiveness and hence potentially higher profit. Here the source of competitiveness results from an information advantage due to a superior genome or, more broadly, a superior information set.

We introduce an elementary learning-by-doing system. It is the linear free value transducer adapted to achieve a maximum power output given its degree of coupling. Hence, its power output, or competitiveness, is given by eqn. 9.17. We assume that the nature of its information set allows it to develop a maximum degree of coupling $q_{max.}$ by further optimizing the set, either purposefully or through learning by doing. Initially the degree of coupling is zero. We also assume that the rate of increase of the degree of coupling is proportional to the difference between the maximum value of the degree of coupling and its present value; mathematically:

$$\frac{dq}{dt} = k\,(q_{max.} - q) \qquad (9.18)$$

This results in:

$$q = q_{max.}\left(1 - e^{-kt}\right) \qquad (9.19)$$
In eqns. 9.18 and 9.19 $k$ is the rate of evolution of the information set. Combining eqns. 9.19 and 9.17 we obtain the time evolution of $P_{opt.}$, i.e. the time evolution of the competitiveness of the information set:
$$P_{opt.}(t) = \frac{1}{4} L_{11} X_1^2\, q_{max.}^2 \left(1 - e^{-kt}\right)^2 \qquad (9.20)$$
Fig. 9.3. Life cycle characteristics based on the VTT approach. The x-axis shows time, the y-axis $4 P_{opt.}(t) / (L_{11} X_1^2)$.

Fig. 9.3 shows a graphical depiction of eqn. 9.20 for $q_{max.}$ equal to one. We obtain the familiar life cycle of an industry based on this elementary model.

Consider two free value transducers, firms, competing for one source of free value defined by the force $X_1$. The difference in their initial information sets gives them different values of $q_{max.}$ and rate of evolution $k$. We furthermore assume that they are allowed to evolve freely until the power output of a free value transducer optimized at a degree of coupling of one is reached as the sum of the outputs of the two free value transducers, i.e. until the summed output of the two free value transducers equals $\frac{1}{4} L_{11} X_1^2$. When that value is reached the power output of the competing free value transducers is shared according to the ratio of their squared degrees of coupling at the time considered.

Note 9.6. The structure of the proposed model is as follows. We have two free value transducers with maximum degrees of coupling $q_{1,max}$ and $q_{2,max}$ respectively. Their learning rates are $k_1$ and $k_2$. The power outputs $P_1$ and $P_2$ are obtained from eqn. 9.20 by substitution of the relevant constants. As long as $P_1 + P_2 \le \frac{1}{4} L_{11} X_1^2$ the power outputs are given by eqn. 9.20. If the sum of the normalized power outputs, i.e. the power outputs divided by the maximum at a degree of coupling of 1, exceeds 1 the power
outputs are set equal to:

$$P_1 = \frac{q_1^2}{q_1^2 + q_2^2}\left(\frac{1}{4} L_{11} X_1^2\right), \qquad P_2 = \frac{q_2^2}{q_1^2 + q_2^2}\left(\frac{1}{4} L_{11} X_1^2\right)$$
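The model of the note can be sketched in a few lines. The parameters are those quoted for fig. 9.4; $L_{11}$ and $X_1$ are set to one for convenience, and the time grid is an arbitrary choice of this sketch:

```python
import numpy as np

# Two competing transducers, parameters as in fig. 9.4 (L11 = X1 = 1 assumed).
L11, X1 = 1.0, 1.0
P_max = 0.25 * L11 * X1**2            # output of an optimized transducer with q = 1
q_max = np.array([0.8, 1.0])          # maximum degrees of coupling, species 1 and 2
k = np.array([0.1, 0.25])             # rates of evolution, species 1 and 2

t_grid = np.linspace(0.0, 100.0, 1001)
P_norm = []                           # normalized power outputs over time
for t in t_grid:
    q = q_max * (1.0 - np.exp(-k * t))     # eqn 9.19 for each transducer
    P = 0.25 * L11 * X1**2 * q**2          # eqn 9.20 for each transducer
    if P.sum() > P_max:                    # market saturated: share by q_i^2
        P = P_max * q**2 / (q**2).sum()
    P_norm.append(P / P_max)
P_norm = np.array(P_norm)
final = P_norm[-1]   # long-run shares approach q_max_i^2 / sum(q_max^2)
```

With these parameters species 2, which evolves faster and has the higher potential degree of coupling, stays ahead throughout and ends with the larger share, which is the qualitative picture described for fig. 9.4.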
Fig. 9.4. The development of the normalized power outputs of two competing free value transducers. The x- and y-axes are defined as in fig. 9.3. For species 1 the values $q_{max.}$ = 0.8, $k$ = 0.1 apply; for species 2 these values are 1 and 0.25 respectively.

This is an admittedly crude model of competition, but in the author's view it serves to identify the types of behavior that we can expect. As we discuss later on, developing more elaborate general, i.e. not case specific, models is not useful anyhow. Figs. 9.4 and 9.5 show the results of some simulations based on the model. Fig. 9.4 shows that the faster evolving information set with a higher potential degree of coupling quickly grows its share of the market and starts to feel the pressure of competition after growing through a maximum in profitability. It remains the dominant player.
Fig. 9.5. As in fig. 9.4. In this case species 1: $q_{max.}$ = 0.8, $k$ = 0.25; species 2: $q_{max.}$ = 1, $k$ = 0.1.

In fig. 9.5 the faster evolving set with a lower potential degree of coupling is seen to be dominant in the early stages of the evolution of the market. Later on the slower evolving set with a higher potential degree of coupling becomes dominant. These patterns resemble situations observed in real life markets. We can tentatively explain the picture in fig. 9.5 as follows. The quickly evolving set is a smaller information set that allows a lower copying fidelity and hence more variation in its information set. It learns more quickly. The size of the set, however, does not allow it to reach the maximum degree of coupling of one. The second set is a larger information set with a higher copying fidelity requirement. It evolves more slowly, but has an amount of information allowing it to reach the maximum achievable degree of coupling of 1.

9.7. The nature of the firm and its evolution.

We concluded that firms are dissipative structures beyond the strictly linear region of value transaction. Firms evolve both as a consequence and a cause of the existence of sources of free value. In this respect firms are a generation of dissipative structures beyond biological organisms. Firms develop, store and process information beyond the limits of the DNA macromolecules of strict biological evolution. Organizations are an inevitable product of sustained evolution in the non-equilibrium environment provided by the biosphere. They emerge and grow, or decay, coupling to free value forces, which partly emerge as a consequence of their activities. Competition for scarce sources of free value feeds back into their information set, which more and more adapts to operate optimally under the conditions in the environment; otherwise they decay and are substituted by better adapted information sets that emerge in evolution. Markets and industries, and an industry structure, do not simply exist but are created and evolve.
Firms are created because of a non-equilibrium situation that they serve to maintain and grow. A second condition for organization exists in non-linearities in the interaction with the environment and with other firms. Autocatalytic behavior, in which an entity enhances its own growth, is an example. In addition there is the need for an information storage and processing system. Firms are basically information processing structures that contain the information needed to localize or create sources of free value in the biosphere and to produce and sell the products on
which their competitiveness rests. Finally, the reproduction of the information set needs to be subject to error or experiment to constantly test the effectiveness of the operations and the products that characterize the firm. This is reminiscent of the dynamic capabilities approach to the firm as summarized in Douma and Schreuder (2008). These authors stress that operational capabilities, that is the information set a company has at a certain moment in time, are changed by goal-oriented interventions by the management of the firm. Here we maintain that the distinction between error and deliberate experiment in changing the information set on which the firm's competitiveness rests is only gradual. Reality is too complex to be grasped in detail and there exists only a reduced information picture of reality, characterized by a significant statistical entropy. There can only be bounded rationality in the changes the firm makes, for example through the R&D based development of new technologies to supply new products and develop new processes. It is the competition and the interaction with the environment that decide whether the changes that management introduced result in the goals pursued. Bounded rationality applies to the adaptation of the information set, and environmental selection provides the arrow to the evolution of firms and industries. This dynamic evolution does not lead to any social optimization of resource use; the system is optimized towards maximum growth of individual firms. This strongly resembles the results of contemporary evolutionary theory (Nelson and Winter (1982), Nelson (1987), Douma and Schreuder (2008), Beinhocker (2007)). The present work provides a sound mathematical framework on which to build an evolutionary economic theory.

What is the nature of firms? Firstly, we define the market the firm operates in. The firm produces products that supply a need in the market. This need translates into a force in VTT.
Through its products the firm couples to that force and derives free value (profit) to fuel its operations. This can be described in terms of the non-linear free value transducer we discussed in chapter 8. There exist other firms that try to couple to the same need in the market, either with the same product or a different one that supplies the functionality to satisfy the need. These firms are the competitors in the "niche" of the firm.

Its information set is the most basic characteristic of the firm. It consists of information for the production of its products and market information to direct R&D's search for new processes and products. It contains technological competences, organizational procedures, HR policies, the knowledge of the people employed by the firm, its public affairs approach and many things more. The information set defines the operational characteristics of the firm. In addition the information set contains the processes to change the information set, e.g. through business planning and corporate planning. Changes induced in the information set can be purposeful, e.g. as a reaction to the experience with its products in the environment, can occur by error, or can derive from the fact that these capabilities contain tacit knowledge that is difficult to grasp and copy. The interaction with the environment determines the direction of development of the information set of the firm. In the biological analogue the information set is the "genome" of the firm, but it is not stored and processed by nucleic acid based macromolecules. The products the firm derives from its information set are instruments to couple to the need in the market and to extract free value, and these are the direct basis of its competitiveness.
There are, beyond the physical products, additional "products" of the information set, such as the image of the firm, its perceived financial solidity and its marketing activities, that engage in the competition in the market. These collective aspects of the firm's competitive strength are the "phenotype" in biological terms. These products compete and provide feedback to direct the evolution of the information set. The products, in the broad sense defined above, define the competitiveness of the information set and drive competition and selection. Not all the elements of the information set need to be internal and captive to the firm. If it has access to other information sets, it can integrate these with its own set and the synergy involved may still lead to an increased competitiveness. Even freely available information (e.g. in the public domain through academic publications) can, if integrated with the firm's information set,
lead to an increased competitiveness. This latter point, that the information set as a whole defines competitiveness, is very important. If we look at the genome of an organism it consists of, in the perspective of the whole genome, small pieces of information called genes. These code for a certain functionality at the protein level. The collective proteins define the activities and shape of organisms in a complex way; the whole is far more than the sum of the activities of single genes. A gene may decrease competitiveness if it is inserted in one set of genes; it may improve competitiveness in another set of genes. The same holds for the information set of an organization such as a firm. It is the whole information set that determines whether the addition of a piece of information, freely available or captive to the firm, will be instrumental in increasing competitiveness. In evolutionary approaches to organizations it appears that the resistance to changing information sets is large; there are inertia (Douma and Schreuder (2008), chapter 10) that resist change. This resembles the copying fidelity threshold we discussed in Section 9.4. If too frequent and too large changes in the information set take place, it does not develop in the direction of increased competitiveness. Rather, a process of melt-down of the information set occurs. There is no longer an arrow of evolution. In an industrial organization this results in a limit to the rate of change of the information set of the firm. It must stay below a certain threshold to prevent "melt-down" of the firm's information set. The firm has a boundary with the environment, just as the membrane and the cell wall in organisms. This need not be an identifiable physical boundary, but can also be the result of the procedures around communication and publication of firm specific information.
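The copying fidelity threshold can be illustrated with a toy quasispecies-style calculation (the model, the parameter values and the function name below are illustrative assumptions, not taken from the text): a "master" information set with replication advantage a over mutants retains a finite share of the population only while the per-symbol error rate u stays below roughly ln(a)/L for an information set of length L.

```python
import math

def master_share(a, L, u, steps=2000):
    """Deterministic selection-mutation sketch: x is the population share of the
    'master' information set (fitness a); mutants have fitness 1 and back
    mutation is neglected. Q = (1 - u)**L is the copying fidelity."""
    Q = (1.0 - u) ** L
    x = 0.5
    for _ in range(steps):
        mean_fitness = a * x + (1.0 - x)
        x = a * Q * x / mean_fitness
    return x

a, L = 10.0, 50           # replication advantage, information-set length
u_crit = math.log(a) / L  # approximate error threshold (about 0.046 here)

low = master_share(a, L, u=0.02)   # below threshold: master persists
high = master_share(a, L, u=0.06)  # above threshold: information melts away
```

Below the threshold the iteration settles at a positive share (about 0.29 for these numbers); above it the share decays to zero, which is the "melt-down" referred to above: too much experimentation and the arrow of evolution is lost.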
In this respect we stress that the firm's information set loses its value if it diffuses into the environment. The integrated set must be unique to the firm, although, as indicated, elements of the set may be freely accessible and still contribute to competitiveness. Protection of information may also take the form of legal instruments such as patent law. The selection theory we discussed leads to the conclusion that the final situation in exploiting a given source of free value, a given niche in the jargon of biological evolution, supports only one dominant information based structure, or at least structures that very closely resemble each other in competitiveness. As a final element we discuss the life cycle of the phenotype. Certainly in the higher organisms the phenotype does not live forever, nor do the products of a firm. Through an embryonic phase, subsequent growth and maturity, it ages and finally dies. This does not apply to the genotype. It lives on in a new generation of the organism. The phenotype has been termed a "survival machine" for the genotype by Dawkins. In fact the genotype is immortal; it survives, albeit in a form of increasing complexity. The same holds for the information set of the firm: it survives the products that the company produces and may increase in complexity over the years. The company's competences survive many generations of products and are precursors of wholly new products. Of course, parts of the company's information set that become obsolete and no longer contribute to competitiveness may gradually disappear. 9.8. Differences and similarities between biological and economic evolution. We repeatedly stressed that evolutionary theories of firms and markets should not be interpreted in terms of a mere analogy to biological evolution. Evolution is an inevitable feature of complex systems that operate away from equilibrium, process information, and compete for scarce resources in transformations and transactions that exhibit non-linear kinetics.
This situation applies to both economic organizations and biological organisms. This is the main area of similarity between economic and biological evolution. Furthermore, there is the aspect of separation between the information set that ultimately defines competitiveness and the vehicle with which competition for scarce resources takes place: the firm's products in the broad sense discussed in the preceding section. This is the distinction between the
genotype and the phenotype that applies to both economic and biological organizations. The circumstance that the genotype survives the phenotype also applies. Another important similarity is the concept of the error threshold. If the system "experiments" too much, the information set will melt away and evolution in the direction of increased competitiveness does not occur. In the author's opinion, and he finds support in the literature (e.g. Douma and Schreuder (2008)), this applies to both biological systems and firms and other economic institutions. The reverse is also true: if an organism reproduces its DNA perfectly, evolution comes to a halt and responding to changes in the environment is no longer possible. Experience shows that organisms seem to operate close to the error threshold and introduce the maximum admissible level of variation in the genome. This is also relevant for economic institutions. If the restrictions on modifying the information set on which the company operates are too strict, it is no longer able to improve the competitiveness of its products and can no longer react to competitor moves and other changes in the environment. This phenomenon is well documented in the literature. There also are important differences. The present view in biology is that the mutations in the genotype that drive evolution are wholly random. In firms there may be an element of design, but we have already indicated that bounded rationality applies due to the complexities of the environment, the competition and the internal processes inside the firm. A second point is that the DNA of an organism is packaged in its product, the next generation phenotype. This is not the case for the products of firms; they do not physically contain the complete information set of the company. Furthermore, the information set, the genome of the firm, can be modified without its physical replication. This is partly different in biology, where the DNA must be replicated to produce a new generation.
Of course, also in biology mechanisms developed that avoid replication for the transmission of information. The brain and its derived products, like science and technology, lead also in biological systems to exogenic evolution and introduce information that is not directly replicated in the next generation of the organism. Finally, but with some hesitation, there is the aspect of the speed of evolution. The information set of the firm is assumed to evolve far more quickly than the pace of biological evolution. This certainly holds for the higher organisms, but applies less to bacteria and notably viruses. One only has to think about the recent experience with the Mexican flu. Finally, we again address the methodological and philosophical problem highlighted in chapter 4 when we discussed the Capital Asset Pricing Model. A model is an abstraction of reality, in all cases a reduced complexity abstraction. It is a physical, verbal or mathematical abstraction that describes features of the system considered interesting by the investigator. An important assumption is that the model and the system are independent. In particular, the outcome of the transformations and transactions in the system should not be influenced by the fact that the model exists. If a model becomes part of the system, and particularly if the predictions of the model are accepted by the players in the system, the assumption of independence of system behavior and the existence of the model breaks down. This is a significant methodological and philosophical problem. It is akin to the problem in quantum mechanics, where the assumption that the system can be observed without influencing its behavior breaks down. The problem identified above applies, of course, to all theories of socioeconomic systems, not just to VTT based approaches. 9.9. Conclusion. We have developed a mathematically consistent macroscopic model of the socioeconomic system, VTT.
The model is based on a reduced information picture of a complex reality, as are almost all models of relevant phenomena in physics. It identifies the forces that drive transactions and reveals their statistical background in the concepts of statistical entropy and
the cost of information. We also stated that the socioeconomic system is a system beyond equilibrium, where significant forces exist and/or are created by the actors in the system. This leads to the conclusion that the socioeconomic system behaves according to a general systems theory of evolution that applies to all systems beyond equilibrium where the forces exceed a critical limit. This evolutionary concept predicts that, in such systems, organizations will appear that extract value from the forces present by a process of coupling to these forces. We particularly identified systems that store and process information with a high but not perfect copying fidelity as an outcome of evolution and a source of further, sustained, evolution. The nature of the fluctuations in the system, i.e. deviations from the averaged properties used in the macroscopic description, determines the development of the system at critical branching or bifurcation points. This introduces a historical dimension in the development of such systems. The problem is that, by the very nature of the macroscopic approach to modeling, information about the fluctuations is not present in the model. Hence, the exact future evolution is not predictable, even if we have fully characterized the system in a macroscopic sense. The evolution of the system is subject to chance and necessity. In the system, evolution to more complex organizations, resulting in the extraction of free value in increasing amounts, will take place. We can, however, not predict the path of the future evolution, although we can say a few things about likely evolutionary patterns based on further assumptions about the system's behavior. In this book we avoided going into detailed models of the socioeconomic system; this was a purposeful choice that should be understandable on the basis of the reasoning above. This problem does not only apply to the socioeconomic system but is also present in the engineering sciences.
This is witnessed by a book of the present author (Roels (1983)), in which macroscopic methods were developed for the description of processes involving the activities of microorganisms. Also there the systems are far too complex to be described in detail, but useful predictions can be made by macroscopic modeling and careful experimentation. Also in that field, too complex models, which cannot be traced experimentally, have little practical value. This observation has also been made by May in his book "Stability and Complexity in Model Ecosystems", where he studied the prospects of modeling systems of competing biological populations. The additional problem in the socioeconomic system is that we do not have the freedom of experimentation that we have in systems in which microorganisms appear. In the foregoing we have stated the limitations and prospects of the macroscopic description. We can predict that evolution will take place, we can generalize some features of the evolution in the direction of increased competitiveness, but we have to remain silent about the path of evolution in an individual case. We are captured in a description involving both chance and necessity. The problem becomes very clear if we look at the picture of human evolution (Lewin and Foley (2004)). Some 5-7 million years ago our ancestors abandoned their tree dwelling habit and developed walking on two legs as a new way of earning a living. This allowed them to develop some primitive tools to assist them in earning that living. They developed hunting and got access to high quality food in the form of meat. This allowed the development of the brain, which consumes about 20% of a human's total energy budget at only about 2% of body weight. Later on, less than a million years ago, this led to an increase in the size of their brains by a factor of two, and later on three, compared to the early ancestors. Animal husbandry and agriculture appeared some 10,000 years ago.
Language and other forms of communication were developed; exogenic evolution took place at an increasing rate. Today we witness our socioeconomic system as an outcome of those developments. Would we have had any chance of predicting these developments 5-7 million years ago?
9.10. Literature cited.
Darwin, C. (1872), The Origin of Species, Chapter 4, London; reprinted by Collier Macmillan, London (1959)
Dawkins, R. (1976), The Selfish Gene, Oxford University Press, Oxford
Douma, S., H. Schreuder (2008), Economic Approaches to Organizations, Fourth Edition, Pearson Education, Harlow (UK)
Eigen, M. (1971), Self Organization of Matter and the Evolution of Biological Macromolecules, Die Naturwissenschaften, 58 (10), 465-523
Eigen, M. (1977), The Hypercycle, A Principle of Natural Self-Organization, Part A: Emergence of the Hypercycle, Die Naturwissenschaften, 64 (11), 541-565
Eigen, M. (1978a), The Hypercycle, A Principle of Natural Self-Organization, Part B: The Abstract Hypercycle, Die Naturwissenschaften, 65, 7-41
Eigen, M. (1978b), The Hypercycle, A Principle of Natural Self-Organization, Part C: The Realistic Hypercycle, Die Naturwissenschaften, 65 (7), 341-369
Lewin, R., R.A. Foley (2004), Principles of Human Evolution, Blackwell Publishing, Oxford
May, R.M. (1973), Stability and Complexity in Model Ecosystems, Princeton University Press, Princeton, N.J.
Nelson, R.R., S.G. Winter (1982), An Evolutionary Theory of Economic Change, Belknap Press of Harvard University Press, London
Nelson, R.R. (1987), Understanding Technical Change as an Evolutionary Process, North Holland, Amsterdam
Roels, J.A. (1983), Energetics and Kinetics in Biotechnology, Elsevier Biomedical, Amsterdam
Schrödinger, E. (1945), What is Life?, Cambridge University Press, Cambridge
CHAPTER 10. ECONOMIES, MARKETS AND INDUSTRIES: VALUE TRANSACTION THEORY 10.1. Introduction. The preceding chapters analyzed the principles of a general value transaction theory. We highlighted the difference between true value, which is a conserved quantity by virtue of the first law of VTT, and free value, which creates the driving force behind natural processes such as economic transactions. We further argued that the difference between value and free value rests in the fact that there only exists a macroscopic, reduced information, picture of reality. It results from the amount of statistical entropy, which quantifies that lack of information, and the fact that information is not a free commodity but comes at a cost. This leads to the conclusion that due to those informational limitations free value is always lower than the true value, i.e. free value results from the following equation:

Gi = Wi - CI Ii    (10.1)
In eqn. 10.1 Gi is the free value of asset i, Wi is the intrinsic or true value of asset i, CI is the cost of information and Ii is the statistical entropy of asset i. Differences in free value, or more precisely differences in the ratio between free value and the cost of information, mathematically expressed as Δ(Gi/CI), are the forces that drive processes such as transactions. In equilibrium situations, where the cost of information and the statistical entropy of assets are the same to all actors, transactions resulting in a return above the risk free return cannot prevail. However, where asymmetries in said quantities exist, free value can be gained by coupling to the forces resulting from the differences in statistical entropy and costs of information. The further development of the theory shows that where differences in free value exist or can be created, it is possible for systems to move away from equilibrium by the mechanism of coupling to the resulting forces. In the near equilibrium region, i.e. where linear phenomenological equations and the reciprocity relations apply, near equilibrium steady states are realized that are stable. Once a steady state is established it does not evolve further. It is the interaction between individual actors that determines the steady state, and the creation of organized forms of matter, or more broadly organizations, does not occur. This leads to a conflict with the phenomena that we daily observe in the biosphere on earth, with its rich variety of highly organized forms of matter, and in human society with its economies, institutions and industries. This conflict disappears if we move beyond the strictly linear region. There steady states may become unstable. This requires the existence, or the creation, of sufficiently large forces. Beyond the linear range more complex dissipative structures may appear. These organized structures develop on the basis of coupling to sources of free value.
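A minimal numeric sketch of eqn. 10.1 and of the force Δ(Gi/CI) may be helpful here; all numbers and the function name are hypothetical illustrations, not values from the text.

```python
def free_value(W, C_I, I):
    """Eqn. 10.1: free value = true value - cost of information x statistical entropy."""
    return W - C_I * I

# Two actors face the same asset (true value W = 100) but differ in how much
# they know about it (statistical entropy I) and in their cost of information C_I.
G_informed = free_value(W=100.0, C_I=2.0, I=3.0)    # well informed actor: G = 94
G_uninformed = free_value(W=100.0, C_I=2.5, I=8.0)  # poorly informed actor: G = 80

# The force that can drive a transaction is the difference in G/C_I between them.
force = G_informed / 2.0 - G_uninformed / 2.5
```

For both actors the free value stays below the true value, as eqn. 10.1 requires, and the nonzero Δ(G/CI) is exactly the asymmetry that can drive a transaction; in equilibrium both actors would share the same CI and I and the force would vanish.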
The structures become increasingly ordered and move in the direction of decreasing statistical entropy. This development is a direct consequence of the fact that beyond the strictly linear range the macroscopic lower order branch becomes unstable with respect to some of the fluctuations that occur in the system. The resulting dissipative structures are of an information storing and processing nature. In fact these organizations are both the product and the source of gradients in free value. If a certain threshold of complexity of the structures is reached, the diversity of the possible structures becomes very large and the domain of structures that has been explored in the course of the evolution of the systems and their environment becomes much smaller than the range of structures that are accessible. We enter an arena of sustained evolution in which steady states are only locally stable and are challenged if
Economies, markets and industries: Value Transaction Theory
fluctuations in the system and its environment become larger. This can lead to a succession of life cycles in which structures appear, grow, become mature and decay by displacement by other emerging structures. This is the realm of sustained evolution. It culminates in complex structures consisting of cooperating entities that are based on information and a corresponding functionality. Information and functionality become related in a closed cycle. In fact these structures have been recognized as models of biological systems, and here this notion is extended to include a far broader range of phenomena, including technologies, industries and economies. In chapter 9 we analyzed the nature of the firm based on this mental model and we highlighted the correspondences and differences between biological and economic evolution. In the remainder of this chapter this mental model, which goes beyond a mere analogy, is explored further.
[Figure: schematic of the firm built around its information set and competences, drawing resources (raw materials, services, information) from the environment and coupling its products and assets to the needs, value and information in the market.]
Fig. 10.1. The nature of the firm as an information processing entity.
10.2. The firm as an information processing structure. Fig. 10.1 summarizes the model of the firm as it emerged in chapters 8 and 9. The essence of the firm is a vast amount of information in both explicit and tacit forms. It represents the collective historical experience of the firm and the plans for its future operations based on that experience. The information set allows the firm to perform a large number of tasks: it allows it to build and operate tangible and intangible assets to produce and sell its products, to invent and develop new products, and to attract and retain suitable human resources, to mention a few examples. It contains the dynamic capability to adapt the firm's future operations to the picture of reality that is contained in its strategies. The essence of the cooperation between the actors that compose the firm rests in the fact that economies of scale and scope result from their cooperation. This allows the firm to further develop its competences, e.g. through R&D, market research, attracting funds and building tangible assets. These in conjunction are the basis of the competitiveness of the firm.
At this point it is instructive to again refer to the concept of the efficient market. The efficient market hypothesis states that it is impossible to earn an above average return on the basis of information that is freely available, i.e. information that is in the public domain (Fama (1976), Jensen (1972)). This would, in the terminology of the Capital Asset Pricing Model, imply that it is not possible to earn more than the risk free return (see chapter 4). The concept of free value states that true value and free value are different things, and the difference was shown to rest in the statistical entropy that defines the state of information of an actor about the detailed structure of an asset. Differences in statistical entropy multiplied by the cost of information define the asymmetries that drive transactions. This difference between true value and free value is, in a non-equilibrium market, different for buyers and sellers. The essence of the firm is that it has vastly more information, and a lower cost of information, about its products and the corresponding markets than the buyer. This information generally consists of the value of the resources and the information it has to procure from the environment, the way in which production processes can be optimally designed and the economies of scale that can be harvested, captive knowledge about technologies and competences needed in the optimization and the design of the products and their production processes, and knowledge about market needs and the value of its products to its customers. These sources of information have both an internal and an external component. This information allows the supplying firm to produce the products at an expense of free value that is lower than the free value to the buyer, based on the buyer's higher statistical entropy and/or higher cost of information. Hence, the buyer will pay more than the free value the producer has to spend and the producer can make a profit.
The realization of this profit is based on asymmetries in information. Information is seen to be the competitive currency of the firm. This also explains why, just as biological entities, firms have a distinct boundary with the environment. Information and goods and services cannot freely cross that boundary. If such a boundary does not exist, the firm's captive information becomes part of the public domain and the asymmetries in information that drive profitability evaporate. In the free value formalism the firm is far more than a "legal fiction which serves as a nexus for a set of contracting relations between individuals", as in the interpretation according to agency cost theories (Jensen and Meckling (1976)). The firm becomes a body in which individuals cooperate to maintain and develop captive information that allows trading at above average returns to fuel further growth of the enterprise. The theory also makes clear that without the expenditure of information work, and hence the investment of free value, the information advantage of the firm can only erode. An expenditure of free value is needed to maintain or grow the position of the firm. In the interpretation of firms and markets on the basis of VTT a number of features that have been identified earlier find a consistent framework. Firstly, the concepts of bounded rationality and asymmetries in information find a place (Williamson (1975), Akerlof (1970), Klein et al. (1978), Knight (1964)). Also aspects of transaction cost theories appear. The firm becomes a vehicle for the management of transaction specific investments, e.g. in R&D, in market research and in engineering. Also the concept of "team production" (Alchian and Demsetz (1972)) emerges. The latter authors specifically mention the information processing nature of the firm and that there are types of information that can only be obtained at a cost.
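The profit mechanism described above can be sketched numerically. All figures and names below are invented for illustration, and the expense formula is an assumed sign-flipped reading of eqn. 10.1: the worse informed an actor, the more free value it must spend to obtain the same product.

```python
def free_value_expense(true_cost, C_I, I):
    """Free value an actor must spend to obtain a product: the higher its
    statistical entropy I and its cost of information C_I, the larger
    the expense on top of the true cost (an assumed sketch, not eqn. 10.1
    itself)."""
    return true_cost + C_I * I

# The seller knows its technology and market well: low entropy, cheap information.
seller_expense = free_value_expense(true_cost=60.0, C_I=1.0, I=5.0)   # 65
# The buyer, obtaining the functionality on its own, would face high entropy
# and costly information.
buyer_expense = free_value_expense(true_cost=60.0, C_I=2.0, I=15.0)   # 90

price = 80.0                      # any price between 65 and 90 clears
profit = price - seller_expense   # funded purely by the information asymmetry
```

Any price between the seller's expense and the buyer's expense leaves both parties better off, and the seller's margin is exactly the informational advantage; if the captive information leaks into the public domain, the two expenses converge and the margin disappears.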
It is important to stress that bounded rationality arises from the fact that a vast amount of information is needed to ascertain the exact nature of the product that optimally serves to fulfill a (latent) market need and to find the most effective way to produce such a product. Therefore, the information of a firm can be superior but it cannot be perfect. There is always room for further improvement; sustained evolution inevitably takes place. The uncertainty involved in identifying, producing and supplying a product that optimally extracts free value from an opportunity in the market allows the firm to invest in obtaining part of that information and to extract above average profits. The uncertainty and the associated risk are the substance from which attractive industries are derived. It is the substance that represents the ultimate creative force.
10.3. Competition and risk. The preceding section discussed the interaction between a seller firm and a buyer and the dependence of their transactions on asymmetries in information. We stated that a firm is in essence an information processing entity that develops and perfects information sets to compete optimally. This section discusses aspects of the dynamics of competition between different firms for a given or potential free value opportunity. First we revisit the concept of the product-market combination (PMC) in terms of VTT. A PMC reflects a given (or potential) need in the market, i.e. a given (or potential) source of free value. Coupling to the need in the market by products that partly fulfill that need allows the companies to compete for the free value that is contained in the need. A PMC resembles the niche in evolutionary biology. Before proceeding, a problem has to be highlighted. In some of the literature on marketing, a PMC refers to products of a given physical identity, i.e. with homogeneous properties and characteristics. We adopt a different approach. A PMC is defined with respect to a given or potential need in the market. Competition can take place in terms of more efficiently supplying a product with given characteristics, or supplying a product with different characteristics to fulfill the same need. In this way product innovation and product differentiation become competitive tools, and companies with different information sets may compete for the same niche or functionality-market combination with differentiated products. Consider two companies planning to invest in a project geared at the development of a product, i.e. functionality, to supply a (latent) need. In general the firms have different information sets and their perceptions of the potential return from the project will differ. Furthermore, different statistical entropies reflect the differences in uncertainty about the future return that can be derived from the project.
Also, the firms generally have different costs of information. The potential return probability distribution functions are generally different. This will result in different free values for the project as perceived by the firms. On average the company with the highest free value earns the highest return. It is important to understand that the superior firm still has elements of risk associated with the choice to embark on the project. These emerge because of the concept of bounded rationality; reality is far too complex to understand it in detail. The estimation of expected value and associated uncertainty is itself always subject to uncertainty and is dependent on the actions of e.g. the competitors and the legislators. The firm may have misjudged the size of the free value opportunity represented by the need it expects to exist in the market. Sometimes the product it has in mind does not fly at all. This is a problem associated with its estimation of the market attractiveness of its product. Also, it may have incomplete or erroneous information about the position of its competitor; the company may have incorrectly estimated its competitive position. In addition, there may also be risk elements that are completely internal. Is the relevance and strength of the firm's competence base, part of its information set, correctly assessed, or has it been overoptimistic? A feature not unknown to companies that rely on R&D based innovations. Has it correctly judged the nature and the impact of the moves of its competitors? This element of bounded rationality, and hence irreducible uncertainty and risk in decision making by the management of the firm, has been discussed in the pertinent literature. Some approaches to markets and firms even question the relevance of management action in an environment in which there exists irreducible uncertainty (Alchian (1950), Hirshleifer (1977)).
In our opinion it is the very evolutionary approach that leads to the conclusion that at the level of a collection of firms, i.e. at the level of the industry structure, the companies have nothing other than best estimates based on their statistical entropy and cost of information. In the long run companies cease to exist if they completely avoid irreducible risk. This introduces an element of luck in the development of individual firms. The classical argument runs as follows (fig. 10.2); see also Tintner (1941a, 1941b) and Alchian (1950).
Fig. 10.2. Result maximization under conditions of uncertainty.
Under conditions of uncertainty the outcome of an action A in terms of return is not completely certain; it is assumed given by the probability distribution depicted in fig. 10.2 and it can be quantified by the statistical entropy that is associated with the distribution. An alternative action B has a probability distribution that is different but overlaps with that of alternative A. Where the probability distributions overlap there are outcomes that result irrespective of the choice for A or B. In those cases one can argue, ex post, that the management decision did not matter at all. The theory developed in this book argues that all management decisions are subject to uncertainty. The risk associated with this uncertainty is partly irreducible and partly reducible. The reducible risk depends on the superiority of the information set of the best positioned firm. In a non-equilibrium situation such superiority can exist. Therefore, the management of the superior firm would, on average, have a lower risk profile and would hence, on average, earn better returns, as our value transaction formalism puts a free value penalty on uncertainty. Also the question of the choice between scenarios A and B, A offering a lower expected result at lower uncertainty and B a higher expected result at higher uncertainty, can be resolved by translating this into a difference in free value using the cost of information. A case in point in this respect is the pharmaceutical industry, where substantial upfront investments are needed to reach the point where a new drug is introduced, and even at the point of introduction complete certainty about the success of the drug does not exist, putting a sizeable investment into potential jeopardy.
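The choice between scenarios A and B can be made concrete with a small calculation; the Gaussian return model, the numbers and the cost-of-information value below are assumptions of this sketch, not prescriptions from the text. Each scenario is scored by its free value: expected return minus the cost of information times the statistical entropy of its return distribution.

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of a normal return distribution with std dev sigma."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

def scenario_free_value(expected_return, sigma, C_I):
    """Free value of a scenario: expected return penalized for its uncertainty."""
    return expected_return - C_I * gaussian_entropy(sigma)

C_I = 3.0  # assumed cost of information for this actor

A = scenario_free_value(10.0, sigma=2.0, C_I=C_I)  # lower return, lower risk
B = scenario_free_value(14.0, sigma=8.0, C_I=C_I)  # higher return, higher risk

best = "A" if A > B else "B"
```

At this (assumed) cost of information the safer scenario A wins despite its lower expected return; with a sufficiently low cost of information the uncertainty penalty shrinks and B comes out ahead, which is the sense in which the A-versus-B choice translates into a difference in free value via the cost of information.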
Still, there exist firms that prosper in the pharmaceutical industry, whilst others, including the ones that have already disappeared, are less successful. In the author's opinion this cannot completely be attributed to luck. The discussion above leads us to touch base with some common approaches to business strategy as these are used in industry. Reference is made to the publications of Porter (1980, 1981, 1985). Strategy formulation can be based on a so-called Sanity Check as depicted in fig. 10.3. A business proposition, e.g. the development of a new product, is first evaluated in terms of the market attractiveness, also called the external perspective. Does the market for the product allow attractive returns and are these expected to remain attractive for a sufficiently long period of time?
Economies, markets and industries: Value Transaction Theory
Fig. 10.3. The Sanity check of strategic management

The value transaction theory approach to this question involves the judgment whether there are needs for functionality in the environment that can create a force in terms of free value and whether there are ways to effectively couple to that force. This can never be a completely objective assessment of the external perspective alone. It is subject to the limitations of our picture of the external opportunities, which is never complete, but is subject to uncertainty or statistical entropy. We discuss this subject more extensively later on. For the time being we assume that there exists a best estimate of the attractiveness of the market. Subsequently, we turn to the internal perspective. Management needs to judge whether the company is able to deliver the product; it has to judge the company's ability to compete. This is only partly a completely internal perspective, because a benchmark against potential competitors and competing products is required. Finally, an estimate has to be made of the costs and risks of entry and the expected returns. This serves to estimate whether the costs and risks are balanced to such an extent that the project can be expected to lead to a positive flow of free value to the company. This whole procedure is subject to asymmetries in information and hence the outcome differs between the different companies in the market. An important feature of this whole process is that the terminology in terms of market attractiveness, ability to compete and the risk reward profile of the project rests on an illusion that these factors are independent and objectively given.
In the previous chapters we repeatedly stressed that the actors and their environment, including the opportunities to create and couple to sources of free value and forces that can drive transactions, cannot be separated; they are created by the true value that is present and by the actions of all the actors to create forces and sources of free value based on information sets that are shaped by the competitive forces. Hence, the essence of strategy is to develop information sets that allow the creation of sources of free value and to successfully compete in harvesting this free value. The essence of strategy is to do things differently from, and better than, the competition and to create and captively exploit an attractive market. This also results in the conclusion that the attractiveness of a market to each of the actors changes in time; success in the present is important to future success, but is by no means a guarantee of future success. Changes in the environment, including the actions of the actors, among them the competitors of the firm, may dramatically influence the competitive landscape. The question is not whether a market is attractive, but whether it can become attractive based on the actions of the collective actors. Strategy evolves. The terminology "from Being to Becoming", to use the words of Prigogine, presents the adequate expression for the dynamics of industries and markets. Strategy becomes a game rather than an analytical exercise and generic strategies, if these exist at all, are not likely to be lastingly profitable.
Strategy is about doing things differently and better, rather than imitating a generic approach. The good part of this message is that reality is so complex, and the associated statistical entropy and the opportunities to develop more successful information sets are so numerous, that the creativeness of the actors in the market is the only limiting factor.

10.4. The industry life cycle.

We briefly return to the concept of the industry life cycle that we discussed earlier. It was shown that VTT allows an explanation of the industry life cycle on the basis of elementary learning-by-doing models. A picture of the life cycle of, as an example, a PMC is depicted in fig. 10.4.
Fig. 10.4. The life cycle of a product market combination

This picture can easily be explained on the basis of VTT and the sustained evolution of dissipative structures. In many instances the opportunity for the creation of a new PMC, and hence the opportunity for the harvesting of the associated free value, emerges from a primary innovation, a "fluctuation" in the information set of one of the actors in the market. It may be a new understanding of the needs in the market on the basis of advanced market information, it may be related to the use of new resources or information in the environment, or it may be related to the development of new basic competences or technologies by one of the players in the market. The market for consumer foods nowadays supports an industry with global sales of over $5,000 billion. In the food industry a trend at the consumer level is an interest in health promoting food ingredients, i.e. ingredients that allow the production of healthy and tasty food. New developments in science, such as the developments in genomics that allow the sequencing of the genetic information, the genome, of the human species and a variety of microbial species, allow the development of such ingredients. Also new ingredients come within reach based on new agricultural resources, as an example accessible through developments in the modern genetics of plants. Generally, the new developments allow players to drastically change the information set on which they operate.
[Figure: a cycle linking Market, Information, Product, Choice, Competence and Lead]
Fig. 10.5. The innovation cycle

The primary innovation starts a process of learning by doing, which is a cyclical process such as the one depicted in fig. 10.5. It depicts a generalized innovation cycle. The innovation cycle is based on Darwinian evolution and survival of the fittest. The theory underlying such development was the subject of chapters 8 and 9. In the early stages after the primary innovation there is ample room for further perfection of the information sets that the actors employ in supplying the market and competing in the market. Generally, different information sets and the associated companies will be competing for the source of free value, assuming that it is sufficiently large. This leads to a diversity of companies entering the PMC. In this situation significant asymmetries in information develop, allowing a relatively large number of players to earn above-average returns. This fuels the further perfection of the information sets in an autocatalytic way. In this way the growth phase of the PMC is reached; we leave the embryonic phase. In this stage flexibility rather than efficiency pays and it becomes important to experiment with the information set to cyclically perfect it, e.g. by product innovation based on R&D. In the late growth phase of the industry the room for improvement of products gradually decreases and efforts cease to pay back in terms of free value that can be harvested. In this phase process innovation, resulting in economically superior ways to produce a product, replaces the emphasis on product innovation of the earlier stages. The number of players will start to decrease as the less successful competitors are forced out of business. This process intensifies as the mature phase arrives and efficiency becomes much more important than flexibility. It is often observed that in this stage the emphasis on process innovation intensifies. Only a few of the most successful competitors remain in operation in this stage.
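The shake-out described above can be caricatured in a few lines of code. What follows is a toy learning-by-doing simulation under assumed parameters (the firm count, the improvement step and the exit rule are all hypothetical), not the formal model of chapters 8 and 9:

```python
import random

random.seed(0)  # reproducible illustration

def simulate_life_cycle(n_firms=10, cycles=30, exit_threshold=0.6):
    """Toy model of an industry life cycle. Each firm holds a scalar
    'information set quality' q in [0, 1]. Each cycle a firm experiments:
    q improves by a random step scaled by the remaining room for
    improvement (diminishing returns). Firms whose information set lags
    too far behind the leader are forced out of business.
    Returns the number of surviving firms after each cycle."""
    firms = {i: random.uniform(0.1, 0.3) for i in range(n_firms)}
    history = []
    for _ in range(cycles):
        for i in list(firms):
            # Diminishing returns: less room to improve near the optimum.
            step = random.uniform(0.0, 0.2) * (1.0 - firms[i])
            firms[i] = min(1.0, firms[i] + step)
        best = max(firms.values())
        # Shake-out: laggards relative to the best information set exit.
        firms = {i: q for i, q in firms.items() if q >= exit_threshold * best}
        history.append(len(firms))
    return history

counts = simulate_life_cycle()
print(counts)  # number of surviving firms per cycle (non-increasing)
```

Because exited firms never re-enter, the player count can only fall, mimicking the radiation-then-convergence pattern of the text; the diminishing-returns step reproduces the closing-in on a local optimum discussed below.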
The analogy with biological evolution is clear: after the phase of evolutionary radiation in the growth phase the number of species exploiting a niche decreases and finally, in many cases, only a few dominant species will occupy the niche. This is known as a process of convergent evolution, where the surviving organisms, and for that matter the remaining companies, are, at least phenotypically and often also genotypically, i.e. at the level of the information set, very similar indeed. Again there is a strong case in point in the food industry, where, as an example in food ingredients such as thickeners and flavors, the number of players per PMC has decreased consistently and strongly over the years. This theory for the evolution of complex systems is seen to apply to many industries, although it may take quite some time before the mature phase is reached; this has to do with the complexity
of market needs and hence the room for improving the competitive information sets. The phenomenon of concentration and the associated monopoly issues have been discussed in the literature (Demsetz (1973)). The literature suggests that the concentration tendency is indeed supported by the increasing efficacy and efficiency of the larger companies in supplying products that match market needs at prices perceived as attractive by buyers. Also the aspect of diminishing returns needs to be emphasized. In the early stage of the life cycle there is ample room for improvement of the information set that supports competitiveness. In the later stages the information sets of the remaining players will start to close in on the local optimum based on the early innovative information set. The term local optimum has to be stressed in this respect; it is a local optimum that can be reached on the basis of the type of information set the players use. Although the further perfection of the information sets remains important, it is subject to diminishing returns, as increasingly the investments are hard to recover in the market. Only the most advanced players will be able to afford the investment in this later stage. This discussion leads to the conclusion that in the early stages of the life cycle effectiveness will pay. It is important to quickly develop the size of the free value opportunity and to harvest as much of it as possible under the forces of increasing competition. The reverse is true in the later stages of the cycle; here efficiency is the name of the game. We have already seen, when we discussed the linear free value transducer, that maximizing power output, i.e. effectiveness, requires a totally different way of coupling, and hence a different information set, than the maximization of efficiency.
In the later stages of the life cycle experimenting with the information sets is no longer the name of the game; drastic improvements are no longer possible and the successful players tend to stick to their knitting. The remarks made in the preceding section also point to an element of vulnerability. On the one hand the players that remain in business in the mature phase of the life cycle are not likely to be substituted by other players that try to develop information sets that result in basically the same approach. Both the theory of sustained evolution and that of the hypercycle lead to the conclusion that it will be very difficult to contest the established players in the final stages of evolution in the niche. However, in biological evolution the tendency is that most of the species that appeared in the history of evolution have become extinct. This can be explained from the fact that there always remains a vast number of possibilities to gain competitive advantage, based on the richness of the possibilities due to the large amount of information used. In biological evolution this has resulted in an increase in the amount of nucleic acid bases that define the information set, i.e. larger genomes. This tendency has remained in place when exogenic evolution appeared on stage, albeit that the information is no longer stored in DNA molecules. This is witnessed by the increasing technological possibilities in the last 200-odd years of evolution in human society. The possibility of employing drastically new approaches to the information set, e.g. the technologies used, constantly lurks around the corner as a challenge to established players; it is one of the main causes of the decay phase of a given PMC. It may be substituted by a radically new primary innovation, often pioneered by players other than the traditional industry contestants.
In most cases the chances for new primary innovations are best if there is a radical change in the overall environment in which the contestants compete. Such circumstances may be a drastic change in the need in the market, e.g. a drastic change in the consumer's appreciation of food functionality, or the discovery of a new need in the market, e.g. the development of the personal computer that fuelled a number of new industries and companies such as Microsoft. There may be a change in the nature, availability and attractiveness of resources, as witnessed by the increasing emphasis on renewable resources in view of the disadvantages and the ultimate finiteness of fossil resources. Also technological innovations may have an impact, such as the prospects of genetic engineering and genomics in the pharmaceutical industry. In those phases of turmoil new initiatives stand a chance to replace activities that are entrenched in the old industry. In terms of
organization it is quite a challenge for existing players to harvest the fruits of the new possibilities, in view of the drastically different requirements of late stage industry evolution and early stage capabilities and competences. In many cases an in-company entity, different from the entity entrenched in a mature phase PMC, is needed to allow the company to survive in the PMC under consideration. Again the analogy with biological evolution is striking: phases of crisis through, e.g., extensive flooding of the earth or even the impact of the encounter of the earth with a celestial body such as a meteorite, can provide the space in which new organisms appear. An example is the large scale extinction of the dinosaurs some 65 million years ago, presumed to have been caused by the impact of a meteorite. It created the evolutionary space for the radiation of the primates, the early ancestors of the apes from which the human species descends. The difference between the effectiveness phase and the efficiency phase of the life cycle is also illustrated by a well-known law for the growth of biological entities. It is the so-called logistic growth law (e.g. J.A. Roels (1983)). Its mathematical expression is as follows:
r_X = μ_max · C_X · (1 − C_X / C_X,max)        (10.2)
In eqn. 10.2, r_X is the rate of growth of the entity, assumed proportional to the harvesting of free value, μ_max is the maximum specific growth rate of the entity, C_X is the number of entities present and C_X,max is the so-called carrying capacity of the environment, the number of entities that can be maintained if the free value force in the environment is fully exploited. Fig. 10.6 provides a graphical depiction of this equation for different combinations of the parameters μ_max and C_X,max.
Fig. 10.6. The logistic growth law (C_X versus time, approaching C_X,max).

A high μ_max and a low C_X,max, represented by the dotted line in fig. 10.6, allow a quick initial penetration but a lower final presence than the reverse case of a lower μ_max and a higher C_X,max, the solid line. The dotted-line variety will dominate in the early stages of the life cycle, whilst the solid-line variety becomes more dominant in the final stages.
Note 10.1. In biological evolution the logistic growth law is often used to illustrate the difference between effectiveness, species that quickly grow into a niche, and efficiency, species that can be sustained at a maximum level, given the carrying capacity of the niche. In evolutionary biology eqn. 10.2 is used in a modified, but mathematically equivalent, form:

R = r · N · (1 − N/K)
In which R is the rate of reproduction of the species, r is the specific rate of reproduction, N is the number of organisms in the population and K is the carrying capacity of the niche. A distinction is made between r-selection and K-selection. This is the distinction between effectiveness and efficiency evolutionary strategies respectively.

10.5. Industry structure: The nature of entry barriers.

Porter's formalism of analysis of the value chain is widely used in the study of industry dynamics (Porter (1980, 1981, 1985)). It rests on an analysis of the so-called Porter cross, depicted in fig. 10.7.
Fig. 10.7. Porter cross for the analysis of industry attractiveness

The central box in fig. 10.7 shows the industry under consideration and the firms that compete for the sources of free value associated with the market need the industry supplies with its products. This process of competition between competitors based on their information sets is known as internal rivalry. The figure shows four forces that, in addition to internal rivalry, are thought of as shaping the industry structure. Two forces relate to the power of the suppliers of resources and information and of the customers of the industry respectively. These forces are based on asymmetries of information between the firms in the industry and their suppliers and customers respectively. In general the attractiveness of the market to the players in it depends on the
possibilities the actors in the industry have to create, build and maintain asymmetries in information. These possibilities generally decrease with the progression of the life cycle of the industry, although this is partly or wholly offset by the decrease in the number of players and hence in the number of outlets and sources for suppliers and customers respectively. Another force relates to new entrants that may enter the market. This depends on the ability of new entrants to match the competitiveness of the already existing players. The difficulty of entry increases with the progression of the life cycle, certainly if new entrants use information sets comparable with those of the players already present in the industry. As stated, new entrants stand the best chance of a successful entry in the early stages of the life cycle and/or if they compete based on a totally new information set, e.g. a totally new competence base. Developing basically the same information set from scratch is extremely difficult and costly, certainly in the late growth and the mature phases of the life cycle. In fact the acquisition of a leading actor in the industry is often a more effective tool. In this way the information sets of the new entrant and of a player already present are merged. A remark has to be made regarding the likely return of such an acquisition in the long run. There is a buyer and seller relation, and the likelihood that such an acquisition is judged profitable in the eyes of both the buyer and the seller depends on asymmetries in information. As an example, the buyer may see synergies when the two information sets are merged that are unknown to the seller. Such leverage can reside in the fact that the acquired information set allows the buyer to improve its position in industries in which it is already entrenched, or the buyer can see possibilities to create and exploit totally new market needs based on the merged information sets.
In biological terms the merger creates a totally new genotype that may result in a wholly or partly new phenotype, new products and services, with enhanced competitiveness. A special case of potential new entrants arises if existing products are substituted by basically the same product based on a new technology. Again this may provide the turmoil that allows the challenging of entrenched players in the industry. Also in this case it may be one of the existing players that pioneers the new approach in an attempt to change the competitive landscape in a mature industry. A new technology often requires a drastically different information set. An example is the development of the "green" routes for the production of β-lactam antibiotics that were pioneered by DSM in the 1980s and 1990s. These green routes are more environmentally benign than the old chemical routes. The use of potentially dangerous and toxic solvents and reactants can be avoided in water based enzymatic and fermentation processes. Furthermore, these processes are more efficient, produce less waste and result in lower conversion costs if diligently optimized. This had an important impact on the competitive landscape in the antibiotics industry. The last force in the Porter cross refers to the likelihood that the products of the entrenched actors are substituted by new products, rendering the existing products obsolete. Such a situation arises most likely if a totally new information set, e.g. a technology which already exists in another industry or a totally new emerging technology, is exploited in the industry. Again the existing players can survive if they succeed in acquiring the new information set, either organically through their own development or by acquisition, licensing or joint ventures. Examples of such developments are numerous in, e.g., the birth of modern biotechnology in the early seventies of the 20th century.
A pitfall in the application of the Porter cross approach rests in treating the industry structure and the forces in too static a way. This results in a too static picture of the attractiveness of the industry. We highlighted that sustained evolution and "inventing" new sets of information is the rule and not the exception in dissipative structures, of which industries are examples. The players in the industry can invent totally new ways of supplying the need or they can create captive access to supplier resources and information, moving upstream in the value chain. Also
movements downstream are an option. In all these options the alternatives are, in addition to access by organic development, access by acquisition, merger or alliance. To summarize: the attractiveness of an industry depends on the possibilities to develop, grow and maintain asymmetries in information between the players in the industry and the rivaling forces identified in the Porter cross. Instruments to maintain such asymmetries are R&D efforts to hone the competences of the actors in the industry and legal means to make information captive, such as collaboration agreements and intellectual property strategies. Furthermore, an industry should be approached from an evolutionary perspective. In the end the attractiveness of an industry depends on the creativeness of the players in the industry and of those that represent the other forces. In the end the battle is based on the ability to develop and maintain a sufficiently discriminating competence base or, alternatively phrased, information set.

10.6. Value Transaction Theory and the value chain.

This section further discusses the concept of the industry value chain. A value chain develops in most industries; it refers to a structure such as depicted in fig. 10.8. In our formalism industries develop because there exists a need at the consumer level that allows exploitation by coupling to the corresponding gradient in the ratio of free value to cost of information, i.e. the corresponding force. All industrial activity directly or indirectly derives from such a need.
[Figure: value chain stages — Resource, Base Prod., Intermediate, Specialty, Cons. Prod., Consumer]
Fig. 10.8. The value chain of an industry.

In most instances those needs cannot be satisfied by resources that can be gathered in the environment in a direct way. Industry bridges the gap that exists between products that are available, or can be made available, in the environment, so-called resources, and products that satisfy a need in society. These two positions are the alpha and the omega, the beginning and the end, of the industry value chain depicted in fig. 10.8. In principle one firm could perform the whole bridging operation. Practical experience, however, shows that this is, more often than not, not the case. A chain develops in which resources are first converted into general-purpose base products, followed by steps in which, through intermediates and specialties, the finished product satisfying the need at the consumer end is produced. The development of such specialization critically depends on the fact that different competences, different information sets, are needed to excel in the various stages of the industry value chain. At the consumer end knowledge of consumer needs is a critical aspect of
information. This is witnessed by, e.g., the big consumer products companies in the food industry, such as Unilever, Danone and Nestlé. These companies spend large amounts of money to perfect their information set regarding consumer behavior and consumer needs. At the resources end the knowledge about effective exploration and sourcing of basic raw materials is critical. An example can be found in the fossil resources (such as oil) industries. Companies like Royal Dutch Shell spend vast resources on exploration technologies to locate new oil reserves. A completely different skill or information base is involved than that of the companies that operate at the consumer end. Clearly those information sets are so different that it pays to specialize. Over time companies have been observed to engage in the integration of larger and smaller parts of the value chain in one company. The overall tendency, however, seems to be in the direction of increasing specialization. The development of this kind of specialization involves at least the following aspects:

- The size of the industry, i.e. the amount of free value that is available to the players in the value chain. It concerns the total free value that results from transforming the resources into the consumer product (Nicolis and Prigogine (1977)). This is the concept of the wealth of the environment or the extent of the market in economic theory.
- The fact that the product of the stage of the business column preceding the activities of the player can be more economically sourced, i.e. at a lower expenditure of free value, than it can be produced by the player itself. This reflects the superiority of the information set of the player producing the upstream product. In view of the uncertainties involved in sourcing the product from a supplier outside the firm, and hence the associated decrease in free value, this advantage must be larger than the transaction costs of settling and policing the contractual relation. This brings us back to transaction cost theories, which define various types of uncertainty, i.e. lack of information, in transactions with outside parties, the dedication of assets and know-how, and the unpredictable, opportunistic behavior of the supplier and the buyer. This becomes particularly important if the free value that can be gained by external sourcing is much smaller than the free value that is at stake at the position of the buyer. In this respect the market for branded pharmaceuticals is a case in point. Here the active ingredients are priced at only a fraction of the price of the finished product.
- The existence of economies of scale and scope. Often products upstream in the value chain serve more than one market need, i.e. are part of many industry value chains.
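The second point amounts to a simple make-or-buy decision rule: source externally only if the free value advantage of buying exceeds the transaction costs of settling and policing the contract. A minimal sketch, with all figures hypothetical:

```python
def should_outsource(internal_cost, external_price, transaction_cost):
    """Toy make-or-buy rule sketched from the text: buy rather than make
    only if the free value advantage of external sourcing exceeds the
    transaction costs of the contractual relation. All inputs are in the
    same (arbitrary) value units."""
    advantage = internal_cost - external_price
    return advantage > transaction_cost

# A supplier's superior information set lets it produce at 70 what would
# cost 100 in-house; contracting and policing the relation costs 20.
print(should_outsource(internal_cost=100, external_price=70, transaction_cost=20))  # True
# With a weaker supplier advantage the transaction costs dominate.
print(should_outsource(internal_cost=100, external_price=95, transaction_cost=20))  # False
```

The rule also makes the book's pharmaceuticals remark concrete: when the externally sourced item is a small fraction of the buyer's stake, even a large relative saving yields a small absolute advantage, easily swamped by transaction costs.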
Of the above mentioned points the first one is rather straightforward. The second point is more involved and requires some further discussion. In principle the separation of the industry value chain into activities in which separate firms appear can only be stable if it results in increased free value for all the firms in the column. This is likely to apply if there exist significant differences in the nature of the information sets necessary to operate optimally in the different stages of the value chain. Specialization allows the players to concentrate on their part of the total game and hence to develop the most suitable information set. This is assisted if economies of scale and scope according to the third point apply, i.e. if different value chains are supported by one and the same upstream product. This allows the upstream player to couple to a larger flow of free value and hence to invest more in the perfection of its specific information set in learning by doing. If such economies of scale and scope do not exist it is difficult to see why separation of the industry value chain would result in a situation stable against the forces of evolution. The
specialization also allows the use of a smaller information set, and hence the rate of evolution can increase as the copying fidelity limit goes down. It is of course also possible to organize the sourcing of upstream products through the creation of a separate entity within the downstream player. This becomes more and more likely if the upstream product is dedicated to one single business column. In this respect a hypercycle type of cooperation could result within the downstream firm.

10.7. The internal value chain.

If we consider a medium sized or large firm it is fair to say that the information set on which such a firm operates in the market is very large indeed. Classical equilibrium pricing theory suggests that sourcing this information in the open market would make most sense and the firm would not be much more than a contracting nexus. The transaction costs formalism (Williamson (1975)) considers the firm and the market as alternatives for sourcing the vast amount of information that is needed in the firm. In section 9.4 we discussed the problems with the processing of large amounts of information. As the information set becomes progressively larger, the copying fidelity of the information has to increase. This leaves less room for optimization of the operations of the firm by experimentation. It also results in high inflexibility and little room for adaptation by changing the information set in response to unexpected changes in the environment. This problem is clear in many large corporations, particularly in the development of new products or the development of new businesses. Many large firms observe that venturing firms, small flexible emerging entities, are more successful in developing new businesses in the early stage of development. This became clear in, e.g.,
the pharmaceutical industry, where the advent of genetic engineering in the early seventies of the 20th century triggered a number of start-ups that pioneered the development of new pharmaceutical products based on this new emerging technology. Successful examples are Genentech and Chiron. Ultimately, large pharmaceutical houses, already entrenched in the pharmaceutical industry, acquired these companies. Chiron became part of Novartis and Genentech was acquired by Roche. This illustrates the difficulties in replacing entrenched players in an advanced stage of the evolution of an industry. Many large firms have tried to resolve such difficulties in coping with radical innovation by creating internal entities geared towards new business development or have, as said, acquired venturing type units. Royal DSM is an example of a company that has created an internal venturing unit.
Fig. 10.9. The internal value chain.

A general solution to the problem of handling large sets of information was described when the hypercycle was discussed in section 9.5. In essence the hypercycle arrangement consists of a number of smaller units that cooperate in a larger functional structure. In this way flexibility is gained by splitting the overall information set into smaller pieces that allow more effective experimentation with the information set, and hence a higher ability develops to adapt the set if
the environment so demands. One has to realize that the hypercycle only works if the units within the cycle cooperate rather than compete. As discussed earlier, this can be expected to be the case if the hypercycle is closed, i.e. if the product of the last information set in the cycle catalyzes the production of the first information set. It remains to be seen what the analogue of closing the cycle is in the firm. This matter will be discussed shortly. Fig. 10.9 depicts a situation where the activities of the firm are split up according to functional departments. This is an arrangement which is not uncommon in industrial practice. The functional departments work together in developing, producing and delivering the product to the consumer, and they source information and other resources from the environment. In this way the difficulties in managing a very large information set are relaxed, as the information sets of each of the functional units are smaller. This allows more possibilities to fine-tune the information sets of the various subunits to the requirements of the environment by experimentation. As an example, the R&D environment requires more freedom for experimentation, due to greater uncertainties, than a production environment, where the efficient operation of costly assets in a reproducible way is the name of the game. This picture leads to at least the following questions. The firm's management has to decide which functions should be organized within the hypercycle that makes up the firm and which functions, with their information sets, are to be made available by external sourcing. In principle switching off the direct competition of the functional units within the hypercycle leads to a situation in which the functional units no longer directly compete with units that have the same function in the outside world.
Hence, the inborn optimization of effectiveness that characterizes sustained evolution no longer works; the functional units may be less than optimally effective. Here lies an important task for the overall company management. This drawback must be compensated for by the advantages of having no transaction costs involved in securing a reliable supply of the function. In addition, the advantages of captive information may be the key differentiating factor. In general one expects that strategically important functions, of which the information set needs to be captive to the company, are not likely to be candidates for outsourcing. Functions necessary to the operations of the company, but not differentiating with respect to competing entities, are likely candidates for outsourcing. This distinction between strategic and non-strategic functions is not as straightforward as it seems. As an example, consider the discipline of analysis in industrial R&D. Most of the basic analytical tools are readily available in many institutes and in academia. However, using these tools in support of the research activities of a company requires detailed knowledge of the strategic objectives of the research at hand, and requires frequent interaction and the exchange of sensitive knowledge. In such cases, in-house facilities can and sometimes will lead to a decisive competitive advantage. It is one of the many examples where, apart from the basic science, the context in which the science is deployed becomes crucial. A question that needs an answer relates to the type of action that would be needed to secure cooperation within the hypercycle in such a way that the best interests of the company are served. If we assume that the individuals operating in the company are to a certain extent seeking to optimize their own utility function, working in the best interests of the company is not always assured. Somehow activities in the company need to be tuned to optimizing the longer term profitability of the company.
This can, as an example, be achieved by defining a remuneration system in which the units are rewarded according to the overall profitability of the company. This is not always easy to do in a way that is perceived as fair, considering the different contributions of the functional units to the long and short term profitability of the company. Another way in which such coordination can result is if we appeal to so-called altruistic behavior. This develops if the individual players consider it in their best interests to adhere to company goals rather than suboptimize their own narrow interests. The literature
argues (Simon, in Dopfer (Ed.) (2005)) that such mechanisms of organizational identification serve to assure effective coordination. A further potential problem with splitting up information sets over functional departments within a company rests in handling situations of multidisciplinary decision making, where inputs from the various functional departments are required to identify the most adequate actions. Different functional departments tend to speak different "languages", and effective communication is far from a trivial issue. This is notably the case in the formulation of effective overall strategies. Leading firms in the industry are known to spend considerable time and money to allow multidisciplinary teams to develop a common picture of reality to assist strategic decision making. Another way to cope with the internal complexity issues of the firm rests in the creation of divisions or business units. The firm is split up in the way that is considered most effective in view of reducing complexity and the problems with the optimization and adaptation of increasingly unwieldy information sets. This splitting process can be of a variety of natures. The firm can be split up according to the customer groups it serves, e.g. into food and personal care directed units. The split can also be in terms of product groups, such as tea, ice cream, and oils and fats. It can be by position in the external value chain, i.e. base chemicals, fine chemicals and specialties. Also the underlying competences, such as enzyme technology and fermentation technology, can be the organizational principle. In all these cases splitting up results in coordination problems of the types mentioned above. Also the question has to be answered regarding the synergies involved in having these entities as part of an overall firm, or whether it would be better to organize these functions outside the firm.
The synergies of having the products and their required competence sets internally have to outweigh the costs of complexity and the problems of coordinating diverse information sets. In this respect the food industry has witnessed an increasing tendency to organize ingredient supply in firms other than the companies operating in the consumer end products markets. Unilever pursued this path when it spun off sizeable ingredients activities to ICI.

10.8. Aspects of competence, technology and R&D management.

Competence, technology and R&D management are recognized as important in the management of present day industrial activities. In addition, competence and technology development are among the main drivers fuelling the growth of economies in a longer term perspective.
Fig. 10.10. The life cycle of a competence/technology: realization of potential over the embryonic, growth, mature and aging phases.
After the birth of many industries in the second half of the 19th century, the phenomenon of industrial research emerged and became a key competitive tool in many industries. If we look at individual technologies or competences, their dynamics often follow the life cycle behavior that was already introduced in the context of market and industry dynamics. Fig. 10.10 presents such a picture of the life cycle of a technology or competence. Such a life cycle often also applies to the academic scientific disciplines that underlie such competences and/or technologies. The development of a competence or technology generally follows the pattern that is characteristic for evolution as we see it in biology and in markets and industries. The survival value of the technology or competence is judged against the other technologies that potentially compete for the same resources. We witness the birth of a new information set, or at least a drastically new addition to the information set that defines the competence. The phases that follow are the growth phase, the mature phase and finally the decay phase. In the decay phase the relative importance of the competence decreases, often because of its substitution by a potentially more powerful technology. The embryonic phase may develop in academia or in the R&D programs of a company operating in the industry. In a company it often results from the longer term competence oriented research programs. It represents the arena of strategic research that is not directly geared towards the development of specific new products or processes. Its goal is to further the competence base as a whole in anticipation of future needs for additions to the company's information set. An example is again the development of modern biotechnology in the beginning of the seventies of the 20th century. It was pioneered in academia, but soon ended up in venture firms and became part of the competence base of many of the pharmaceutical firms.
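The qualitative shape of the life cycle in Fig. 10.10 can be sketched with a logistic curve plus a decline term for the aging phase. The book attaches no functional form to the life cycle; the curve and all parameters below are invented purely to reproduce the S-shaped rise and later decline.

```python
import math

# Illustrative sketch (not from the book): the realized potential of a
# technology traced as a logistic curve, with the aging phase modeled as a
# decline once a substituting technology takes over at t_subst. All
# parameter values are assumptions chosen only for the qualitative shape.

def realized_potential(t, r=0.5, t_mid=15.0, t_subst=35.0, decay=0.15):
    """Logistic rise toward full potential, then exponential erosion of
    relative importance after a substitute appears at t_subst."""
    level = 1.0 / (1.0 + math.exp(-r * (t - t_mid)))
    if t > t_subst:
        level *= math.exp(-decay * (t - t_subst))
    return level

def phase(t):
    """Label the life cycle phase, using the same illustrative parameters."""
    if t > 35.0:               # substitution under way
        return "aging"
    lvl = realized_potential(t)
    if lvl < 0.1:
        return "embryonic"
    if lvl < 0.9:
        return "growth"
    return "mature"

for t in (2, 15, 30, 45):
    print(t, phase(t), round(realized_potential(t), 3))
```

The thresholds (10% and 90% of potential) marking the phase boundaries are equally arbitrary; only the ordering embryonic, growth, mature, aging carries over from the figure.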
Apart from the life cycle of the technology itself, another element of a competence or technology needs to be taken into account. As we discussed before, in a system involved in competition for a scarce source of free value, it is the impact on the competitiveness of the information set of which the competence or technology is a part that defines whether or not it will be selected in the further evolution of the company. Table 10.1 illustrates this. Emerging technologies have an uncertain but potentially large influence on the competitiveness of the company in its industry. Pacing technologies have the potential to change the competitive landscape in the industry but have not yet materialized in products and processes in the industry in which you compete. Key technologies are critical to competitiveness in your industry, as these will change the competitive landscape and are already proven in products and processes in your industry. These competences or technologies result in products and processes that challenge existing products and processes. Finally, base technologies are a qualifier, a need-to-have for successful competition in your industry, but they offer only meager chances of lasting competitive advantage. Typically they are widely available in the industry and can be easily accessed.

Table 10.1. Competitive impact of competences/technologies

Emerging: Technologies having a potentially large but uncertain impact on competitiveness in the industry.
Pacing: Technologies with the potential to revolutionize the competitive landscape through product and/or process innovation. Have not yet been embodied in a process or a product in the industry. Can become key technologies.
Key: Technologies critical to successfully competing in the industry. Can lead to radical innovations.
Base: Technologies that are qualifiers and a must for success in the industry. Offer limited potential for lasting competitive advantage. Typically widespread and readily available.
The life cycle and the competitive impact are two different animals. The life cycle reflects the development of the technology in its own right, irrespective of the specific industry under consideration. The competitive impact reflects the influence of the technology on competitiveness in a specific industry. It is often the case that one industry, where the potential of the technology is recognized to be large at an early stage of its life cycle, is largely responsible for pioneering the development of the basic technology and its early embodiment in products and processes. Later on these radical technologies become recognized as vital to other industry sectors and colonize the information sets of other industries. Again the developments in modern biotechnology are a case in point. Here, after its emergence in academia and start-up firms, the development was driven by the potential in the pharmaceutical industry, quickly followed by plant biotechnology in the seed industry. Later on it progressively penetrated the chemical industry and the food industry and is now well established in a wide range of industry sectors. In this respect managing the technology and competence base of a firm is by no means an easy task. A considerable period of time may elapse before an embryonic technology becomes embodied in products, i.e. starts to be coupled to sources of free value in the market and becomes a source of income for the company. This in addition involves large uncertainties, i.e. the potential free value that can be extracted from the technology remains unclear for a considerable period of time, whilst investments that compete for scarce resources are needed early on. A period of ten years between the first efforts and the realization of the technology or competence in the firm's products and processes is by no means exceptional. Furthermore, moving too early in the technology, i.e.
in the embryonic or early growth phase, often results in prohibitive costs and decreased competitiveness in the long run. Unfortunately, the reverse is also true. Acquiring the information set that is characteristic for the technology becomes increasingly difficult as the life cycle of the technology progresses. This particularly holds for the acquisition of the information set characteristic for a technology already established as key in your industry. In these late stages organic entry may prove very difficult and/or unrealistically risky and costly. Alliances and acquisitions prove more feasible in those cases. DSM recognized the impact of biotechnology on its activities in chemistry at a quite early stage, and developed in-house activities in the eighties of the 20th century. Later on, in the nineties, it further acquired the technology, first through its joint venture Chemferm with Gist-brocades. Chemferm was an entity that effectively changed the technology base for the production of semi-synthetic penicillins by the already mentioned introduction of the far more competitive and environmentally benign "green" enzyme based technologies. Later on DSM acquired Gist-brocades to deploy the technology over a wider range of businesses. Another distinctive characteristic of technology management rests in the relative position of the company with respect to existing and potential competitors in its industry. Table 10.2 summarizes the basic features. The competitive position ranges from dominant to strong, favorable, tenable and weak, according to the descriptors in Table 10.2. The probability of successfully competing becomes more uncertain, and hence a project more risky, if we move from a dominant to a weak position. Accordingly the likelihood that the company will extract free value in excess of its investment progressively decreases.
Table 10.2. Competitive position with respect to competences and technologies

Dominant:
- Powerful technology leader
- High commitment, manpower, funds
- Well recognized in its industry
- Sets pace and direction in technology; competitors in catch-up mode

Strong:
- Able to take independent action and set new directions
- Commitment and effectiveness consistently high
- Technological position differentiates its business from competitors

Favorable:
- Able to sustain technological competitiveness in its business
- Has the strength to improve its competitive position
- Not a technology leader except in niches

Tenable:
- In a catch-up mode
- Unable to set independent course
- Can maintain competitiveness, but unable to differentiate from competition

Weak:
- Losing ground in technological competitiveness
- Short term fire-fighting focus
- Difficult and costly to turn around on own strength
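The point that a project becomes riskier as the competitive position moves from dominant to weak can be given a toy quantification. Table 10.2 itself is purely qualitative; the success probabilities, value and investment figures below are entirely assumed for illustration.

```python
# A toy quantification (all numbers invented; the book gives none) of how
# the expected net free value of one and the same project deteriorates as
# the company's competitive position weakens.

position_p_success = {          # assumed success probabilities, illustration only
    "dominant": 0.7,
    "strong": 0.55,
    "favorable": 0.4,
    "tenable": 0.25,
    "weak": 0.1,
}

def expected_net_free_value(position, potential_value, investment):
    """Expected free value extracted minus the investment required."""
    return position_p_success[position] * potential_value - investment

for pos in position_p_success:
    env = expected_net_free_value(pos, potential_value=100.0, investment=30.0)
    print(f"{pos:9s} expected net free value = {env:+.1f}")
```

With these assumed numbers the same project flips from attractive to value-destroying somewhere between the favorable and tenable positions, which is the qualitative message of the table.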
The development of new competences and related new products and services involves considerable uncertainties and, through the related exposure, appreciable investment risk. The formalism introduced above derives from Roussel et al. (1991). Again we take the pharmaceutical industry as a case in point. There we observe that the development and market introduction of a successful new drug involves an expenditure in the order of €… million. It may take over ten years, and many leads need to be explored, to result in one hit that reaches the market and contributes to free value gain. Therefore R&D based new business development usually employs a staged approach, geared at reducing exposure to uncertainty in the early stages of the process. The early stages of the process are characterized by a large statistical entropy I₀. The objective of the development trajectory is to generate the information that reduces the statistical entropy to near zero when the product is introduced. In practice this is never fully achieved; significant statistical entropy remains at the moment the product is introduced. Of course, one need not limit the staging to one feasibility stage; more stages can be introduced, particularly if the uncertainty is large and the overall exposure is large. This is exemplified by the practices in the pharmaceutical industry, where leads that emerge from the exploratory phase go through a process of clinical testing that consists of three or four stages in which the exposure gradually increases as the uncertainty is reduced. We will now discuss another issue regarding industrial R&D. It concerns the relation between science in the academic sense and competence as it develops in industries as a key competitive tool. The distinction between academic science proper and the industrial science community is at once large and subtle.
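Before turning to that distinction, the staged reduction of statistical entropy described above can be illustrated numerically. Here the statistical entropy I of a project is taken, for illustration only, as the entropy of a uniform choice among the candidate leads still in play; the funnel sizes and costs below are invented and not taken from the book.

```python
import math

# A hedged numerical sketch (lead counts and costs invented): the
# statistical entropy I of an R&D funnel taken as log2 of the number of
# candidate leads still alive. Each stage discards leads, so I falls
# toward zero at market introduction, while the exposure committed per
# surviving lead is allowed to grow.

def statistical_entropy(n_candidates):
    """Entropy (bits) of a uniform choice among the surviving candidates."""
    return math.log2(n_candidates)

funnel = [
    ("exploratory",  10000, 0.5),    # (stage, surviving leads, cost per lead)
    ("preclinical",    250, 2.0),
    ("clinical I",      40, 10.0),
    ("clinical III",     5, 60.0),
    ("market",           1, 150.0),
]

for stage, n, cost in funnel:
    print(f"{stage:12s} leads={n:6d}  I={statistical_entropy(n):5.2f} bits"
          f"  cost/lead={cost}")
# Entropy decreases monotonically: the information generated at each stage
# pays for the right to increase exposure on fewer, better understood leads.
```

The staging logic is visible directly: large I is tolerated only while exposure per lead is small, and exposure is raised only once information has brought I down.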
In academic science the focus is on understanding reality as it is presented to us. Thomas Kuhn states in his book "The Structure of Scientific Revolutions" that scientific progress follows a cycle of development analogous to that of a competence or a technology. It follows a life cycle. In the so-called pre-paradigm phase an established view on explaining a scientific problem does not exist; there are several competing theories that are proposed to explain aspects of reality. This is very akin to the embryonic phase in the general life cycle; many new innovative approaches compete to explain a certain phenomenon. After this early
period, consensus on the preferred paradigm develops. The selected paradigm enters a growth phase; its application to a broad range of observations is sophisticated further in a kind of scientific "puzzle solving" following strictly defined rules. The scientific community locks into a certain approach to the interpretation and explanation of the aspects of reality that are the subject of scientific investigation. This process continues until the paradigm enters the mature phase, in which its explanatory power gets exhausted. This generally also is a phase in which conflicting evidence appears that challenges the validity of the paradigm the scientific community adheres to. Generally the scientific community resists the challenge and adheres to the accepted paradigm. This process continues until the evidence against the reigning paradigm becomes overwhelming and a scientific revolution results. A new paradigm appears that allows new avenues to the explanation of reality. In many instances the innovation is due to outsiders to the then dominant paradigm. The beginning of the twentieth century witnessed a number of such developments. The theory of general relativity, triggered by Einstein, challenged the existence of an objective time. The introduction of quantum mechanics introduced the notion that the very process of experimentation, the interaction between reality and the observer, may change the behavior of reality. This leads to challenging the existence of an objective reality, independent of the process of observation. Later on the science of complexity introduced the basis of evolutionary theories through the notion that in complex systems the future is not contained in the past and the present and hence is fundamentally unpredictable. The way in which science evolves looks much like the normal path of evolution in a broader sense. Once a certain information set has been chosen, it pays to adhere to that set's general characteristics.
Too drastic and/or too frequent changes in the basic information set lead to a situation in which evolution makes no progress. It pays to lock into the existing set and improve it by small changes. This process continues until a crisis is reached where the existing information set needs to be changed drastically. Once a revolutionary new information set, with new evolutionary potential, appears on stage, the process repeats itself and new progress is possible. The locking-in behavior that Kuhn observes is the very substance making scientific progress possible. No doubt this has been an important contributor to the undeniable success of the scientific approach in improving the understanding of reality, which was very instrumental to technological and economic progress. In industrial research the emphasis should not be on the explanatory area, the area that could be termed the "know-why" of competences and disciplines. It is more on the "know-how" area, the knowledge about the realization of products and processes that are useful in a commercial sense. Explaining things is secondary and only interesting as far as it allows a more economic realization of innovations in the market place that contribute to the firm's long term competitiveness. The scientific method is a tool for the realization of commercially attractive results, rather than an objective in itself. In some instances new scientific approaches are seen to result in new competences that lead to successful innovations in the market. This was most probably the case with the emergence of genetic engineering in the early 1970s. In other cases industrial innovations lead to new questions and precede their full scientific understanding. In this way a cyclic interaction develops between science proper and the development of R&D based innovations in industry. Scientific knowledge is by no means a prerequisite for industrial innovations.
This is witnessed by the early deployment of microorganisms in industry; it preceded the understanding of microorganisms and of the genetic code as the basis of the functionality of these organisms. In fact this cyclic interaction between science proper and industrial research provides a source of potentially fruitful cooperation between the scientific communities in private enterprises and in public bodies, such as universities. This is exemplified by the approach of the so-called Leading Technological Institutes, as developed by the Dutch government in the last decades of the 20th century. An example is TIFN, the Top Institute Food and Nutrition, in which the present author participated. Other examples of
such initiatives, called Private Public Partnerships and nicknamed PPP's, exist and are often seen to lead to fruitful interactions between academia and industry. As said, TIFN is an example of a PPP. A presumed problem with industry-academia collaborations, and with collaborations between several companies and knowledge partners, is that the knowledge developed is shared by players that may be wholly or in part competitors. Also the need to publish the results is often presumed to be a problem. Of course, in the latter case patent protection prior to publication is a strategy; it is followed in TIFN. We argued that information that is part of the public domain can still, if integrated with the information set of a company, lead to increased competitiveness, even if competitors avail themselves of the same information. This can be compared to the situation in biology, where e.g. the information regarding the energy factories of living cells, the mitochondria, is shared by many species. Still, this variety of organisms, using the same energy generation principle, differs in success in terms of survival value, i.e. competitiveness. Generally, the companies collaborating in a multi-company, multi-knowledge-institute collaboration have different information sets and will hence benefit differently from integrating the knowledge generated in the collaboration into their information sets. The result often is that all companies become more competitive at lower costs, if the collaboration is properly managed. In the context of this section on competence, R&D and technology management we will now discuss the 20th century developments in economic theory and the role of technology in the development of the economy. These elements are the subject of both exogenous and endogenous approaches to economic growth (Aghion and Howitt (1998), Romer (1990) and Solow (1956)). In these approaches the dynamics of technology is assumed to be an important factor underlying growth.
The exogenous approach takes technological change as given, determined by factors outside the economy. In contrast, the endogenous approach assumes technological change to be caused by activities within the economic system, i.e. technological change depends on the activities of the actors in the system. Examples are private initiatives such as industrial R&D, and government activities to provide stimuli to R&D, such as tax incentives or public R&D in academia or institutes. The first problem we have to tackle is where the economic system begins and where it ends, i.e. with respect to which frame of reference we define exogenous and endogenous. Are government activities exogenous, or are they endogenous and part of the economic system? We will take the latter perspective. A further problem is an almost philosophical one; it refers to the matter of cause and effect. The VTT evolutionary perspective defines cyclical interactions between causes and effects. An example is the question regarding the driver behind the evolution of economies. Is the driver variation of the genome of the firms, or is it selection at the level of the phenotype? We have argued that once the cycle is closed, the familiar distinction between causes and effects becomes blurred. Cause and effect merge due to the cyclical interaction. In discussing the concept of sustained evolution we saw that even the forces that drive evolution are both cause and effect; the very activities of the actors in the market are agents in the determination of the size of the forces, and hence also the forces that drive evolution are wholly, or at least to a major extent, endogenous to the system. The only really exogenous factor is the primary source of non-equilibrium on earth, the abundant free energy supplied by solar radiation. The discussion in this work shows that technological change is part of the essence of the development of the economic system.
It is an important aspect of the development of superior information sets and thus drives economic evolution. Theories that do not consider this factor in fact ignore the overwhelmingly important contribution of the evolution of information sets to increased competitiveness. Technological change is a very important aspect of economic growth, certainly in a mid to long term perspective. We therefore have to reach the conclusion that technological change is wholly endogenous and is shaped by the actors in the economic system. Today this is the dominant view in economic theory.
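The contrast between the exogenous and endogenous views can be sketched in a toy Solow-style growth model. The endogenous rule below is a crude stand-in for the knowledge production functions of the endogenous growth literature (Romer (1990)); all parameter values are invented for illustration.

```python
# Illustrative contrast (all parameters invented) between exogenous and
# endogenous treatments of technological change in a toy Solow-style model:
# output Y = A * K^alpha, with capital K accumulating out of savings.
# Exogenous: technology A grows at a fixed rate g imposed from outside.
# Endogenous: A grows with the share of output the actors devote to R&D.

def grow(years, endogenous, alpha=0.3, s_k=0.2, s_rd=0.05,
         delta=0.05, g=0.02, rd_productivity=0.6):
    A, K = 1.0, 1.0
    for _ in range(years):
        Y = A * K ** alpha
        K += s_k * Y - delta * K            # capital accumulation
        if endogenous:
            A *= 1.0 + rd_productivity * s_rd   # growth set inside the system
        else:
            A *= 1.0 + g                        # growth imposed from outside
    return A * K ** alpha                    # final output

y_exo = grow(50, endogenous=False)
y_endo = grow(50, endogenous=True)
print(f"output after 50 years: exogenous={y_exo:.2f}, endogenous={y_endo:.2f}")
```

The point of the sketch is not the numbers but the structural difference: in the endogenous variant the growth rate of A is itself a decision variable of the actors (here, the R&D share s_rd), which is exactly the sense in which the chapter argues technological change is endogenous.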
10.9. An evolutionary view of corporate development.

This section analyses aspects of corporate strategy and corporate development in terms of the VTT formalism. We treat a case highlighting the development of the Dutch company Gist-brocades. The analysis presents a broad brush approach to the development of that company over time. This provides a stepping stone for the formulation of some general features of corporate strategy and corporate development. For convenience we study the evolution of Gist-brocades up to the point where it merged with DSM in 1998. The, admittedly very interesting, period after the merger is omitted; this hardly changes the message conveyed. Gist-brocades was founded as "Nederlandse Gist- en Spiritusfabriek" (NGSF) in 1869. It was a producer of baker's yeast and the associated product alcohol. Its founding was based on a clear market need, as the community of bakers was dissatisfied with the yeast sources available in those days. The production of its products was based on fermentation technology using agricultural resources, such as grains. Fermentation was in those days a quite elusive and new technology. It was made possible by the scientific understanding that microorganisms are the source of useful products such as ethanol. Today this would be termed the competence to use a microorganism, more specifically a eukaryotic organism, Saccharomyces cerevisiae, to produce useful products; in this case baker's yeast for the leavening of bread, and ethanol for e.g. beverages and perfumes. In the 1990s Gist-brocades was a co-leader in the European market for baker's yeast and it had largely discontinued the production of ethanol. The first half of the 20th century witnessed the addition of industrial chemicals such as acetone and butanol, geared at totally different markets, to the palette of products of the company.
The basis of this expansion is that fermentation, in this case fermentation by bacteria, produces these products from agricultural resources such as grains and sugar containing crops. The company achieved this step into a new market by expansion into a slightly different, but much related, competence than the one the company was familiar with through its baker's yeast based activities. Fermentations based on bacteria were introduced; this expanded the competence base and the information set of the company by adding a different, but closely related, type of information. Today we call this an example of competence based expansion. It allowed the company to extract free value from a new market need. The opportunity was fuelled by the First World War, which shifted the pattern of demand for industrial chemicals. We now expand a little on the notion of a competence. A competence (Prahalad and Hamel (1990)) is a complex integration of disciplines, technologies and other pieces of knowledge and routines. In general it can be used to produce useful products for a number of markets. In addition, it is difficult to copy, because it is a complex integration of disciplines that has been honed by competing for sources of free value over a number of years. Finally, it has a distinctive influence on the ability of the company to make available products that allow it to couple to sources of free value, fuelled by market needs. It has a distinctive impact on competitiveness. A competence is, in line with the general formalism of VTT, of an informational nature; it can be understood to be part of the "DNA" of the company. The recognition of the importance of competence development resulted in the early founding of an R&D department in NGSF at the end of the 19th century. Today the production of industrial chemicals is the realm of the fossil resource based industries and these products have disappeared from the portfolio of the company.
Clearly the company was not able to follow the shift in the resource base for the production of these products, as this required a totally different information set. With the disappearance of these products, i.e. of the phenotype, the fermentation competence remained part of the genotype of the company. This is a clear example of the genotype surviving the phenotype of the company.
At the end of the Second World War a new opportunity arises to significantly expand the business scope of NGSF. During the war, the Allies developed penicillin production based on the fermentation of the mould Penicillium chrysogenum. This built on the discovery of penicillin as an antibiotic substance in 1928 by Fleming (1929). The Allies used penicillin at the end of the Second World War to treat battlefield infections, an important cause of incapacitation of soldiers. Again, it is the fermentation capabilities, a competence based expansion, that allow NGSF to enter a new market, the pharmaceutical market, shortly after 1945. At the date of the merger with DSM, Gist-brocades was the largest producer of penicillins worldwide. Penicillin is naturally produced in two varieties, called Penicillin G and Penicillin V; these products have in common a chemical structure called the β-lactam moiety, but differ in a side chain that is attached to that moiety. Penicillins proved to be quite successful antibiotics, and variations on the chemical structure of the penicillins were developed; these are termed semi-synthetic penicillins. Ampicillin and amoxicillin are notable examples. Amoxicillin is still the most widespread product used in antibiotic therapy. To enter the market for semi-synthetic penicillins on the basis of the fermented product Penicillin G, the company needed to acquire a new skill, a new information set. Expertise on the chemical modification of the fermentation product needed to be developed. The company did that in a largely organic way, based on sizeable R&D efforts. At the end of the 1950s the kernel develops of an important new step taken in the beginning of the 1960s: a new competence based expansion in fermentation is realized. The introduction of protein splitting enzymes in household detergents leads to a new market for products based on fermentative production. Through this opportunity an important range of enzyme based products was added.
The market for industrial enzymes developed. The company develops into a large player in industrial enzymes, although it sold its production of a number of the larger commodity enzymes in the 1990s. The expansion into the industrial enzymes market needed new additions to the competence base. Application know-how and modern genetics, e.g. recombinant DNA technology and later on genomics for the production of enzymes, augmented the company's genome. This is again largely based on R&D driven organic development, although some critical technologies derived from acquiring know-how from third parties. A distinctive new competence was added to the skill bases in fermentation technology and chemistry, resulting in a significant broadening of the company's information set. At the end of the 1960s the presence of NGSF in the pharmaceutical market through penicillin prompts a major move. The company merges with Brocades, a company operating in the market for pharmaceutical products, to form Gist-brocades. This is an example of a customer based expansion of the company, completely different from the earlier competence based expansions. With the merger, the company acquires new capabilities in marketing pharmaceutical specialties and the development of new pharmaceutical products. That merger takes place in an era when the pharmaceutical industry still shows appreciable growth; the industry is clearly still in its growth phase. In the eighties and nineties of the 20th century the pharmaceutical industry shows signs of consolidation and the players in the industry rapidly grow in size and become dependent on sharply increasing efforts and expenditure in new product development.
The activities of Gist-brocades were not able to keep pace with these developments and the company decided to sell its pharmaceutical activities to the Japanese pharmaceutical company Yamanouchi at the end of the eighties, whilst retaining its position in fermentation and chemistry based ingredients and active products for β-lactam antibiotics. The beginning of the 1980s welcomes a new customer based expansion of Gist-brocades, again by acquisition. The position in baker's yeast expands by moving into other products for the production of bread and pastry. The company enters the markets for bread improvers and
pastry ingredients. This is again reflected in a sizeable expansion of its competence base; the information set on which it operates is again diversified. This also leads to new outlets for its existing technology base in enzymes, as bread improver enzymes are added to its product portfolio. Such market based expansions, witnessed by Gist-brocades' expansion in pharmaceuticals and bakery products, will be termed customer specialist types of expansion.
[Fig. 10.11 shows a timeline (1870, 1945, 1965, 1990) of business areas and supporting competences: yeast/alcohol (fermentation), chemicals (ind. chemistry), penicillin (fermentation, ind. chemistry), pharma/enzymes (drug development, enzyme technology), bakery ingredients (appl. bakery) and modern genetics.]
Fig. 10.11. The corporate history of Gist-brocades. This bird's eye view of the development of Gist-brocades up to the merger with DSM is summarized in fig. 10.11. After this discussion of Gist-brocades and its historical development and evolution, we embark on a generalization of the picture. A company starts out as a narrow market-competence combination. It enters a "niche" market representing a need to which it couples to generate free value. To supply the product that couples to the free value source an adequate information set, a competence base, is needed. It can be acquired from the outside or it can be organically developed. Also the reverse can be true: the company may have developed a more or less captive competence base, e.g. based on new scientific developments, for which it sees an opportunity in the creation of, and coupling to, a source of free value. In this way an initial customer base and a competence base matching it to an extent result. In the beginning this competence base is far from perfect; it is further developed, and the customer base is expanded by co-evolution with the gradually perfecting competence base or information set. After a while the company sees a new outlet for its activities. As an example, this may consist of new markets that are made or become available to the company based on its existing competence base. This is a critical strategic moment in the history of the company. In almost all cases such expansion involves a larger statistical entropy, uncertainty, than furthering the existing customer base/competence combination. It will involve substantial risk and exposure, and future success is far from certain. A point is reached where company management needs to make a bounded rationality decision to grasp the opportunity or not. Both decisions have an important historical significance in terms of the unknown future development paths that become open to the company.
In terms of the theories of evolution of dissipative structures a bifurcation point is reached. After entering the new market the company no doubt finds out that its competence base is not optimally suited to the new market and it sees the need to modify its information set to optimally serve the new
market. This can be realized by gradual adaptation of its competence base to the new customer base, or acquisition of an already existing competence in the market. Such an acquisition is again a major critical strategic move that may further shape the ranges of futures that are open to the company. This development is depicted in fig. 10.12.
[Fig. 10.12 sketches markets A, B and C, each paired with a competence base A, B and C.]
Fig. 10.12. Co-evolution of market exposure and competence base. After a while the process is repeated and the competence base needs to change accordingly. The figure illustrates the co-evolution of the company and its competence base.
[Fig. 10.13 plots the number of competences (one to many) against the number of markets (one to many): the niche player (one market, one competence), the competence driven company (many markets, one competence), the customer driven company (one market, many competences) and the one stop shop (many markets, many competences).]
Fig. 10.13. Corporate development strategies
The strategies deployed in the development of a corporation are summarized in the sketchy diagram depicted in fig. 10.13. An emerging company starts out as a niche player based on one market and a product. From
there it can move in basically two directions. It can take its competence base as a starting point and move to new markets. Its competence base will stay rather homogeneous, although it has to augment its basic competence with market specific aspects. Market specific knowledge and application know-how have to be integrated into the competence base. In many companies the growth resulting from this strategy leads to a partial splitting of the company into market specific business units. Each business unit develops its own market specific competence base, and the span of control of management can be relaxed. Also the rate of experimentation with the competence base can be greater without running into problems with the copying fidelity limit. The problem associated with this move is that the focus on the basic competence diminishes and the synergy fruits of the combined skill bases may be lost. This can be avoided if the business unit structure is not followed for all parts of the internal value chain; the basic competence can, for example, still be fostered in a central R&D unit. The second approach with reference to fig. 10.13 is that the company sticks to its market and develops a broader range of competences to support a growing range of needs in the specific market. This generally results in a less focused competence base and often needs to be done through acquisition of new competences. Examples of both of the developments highlighted above appear in the historic development of Gist-brocades. Of course, a mixture of these strategies is also possible, and the ultimate picture is that of the one-stop shop: many markets and many competences. History in industry shows that this latter strategy is difficult and success in the long run is far from guaranteed. A final feature of corporate development strategies relevant to this discussion is pictured in the bifurcation diagram depicted in fig. 10.14.
The diagram starts with the start-up of the company, which enters a phase of gradual development up to point A in the diagram.
[Fig. 10.14 sketches a development path that branches at point A into branches leading to points B and C.]
Fig. 10.14. A bifurcation diagram of the development of a corporation. At point A a critical decision regarding corporate development needs to be made; it reflects a critical strategic decision. An example of such a point is the entrance of Gist-brocades into the penicillin market in the last stages of World War II. As soon as the decision is taken the company gets locked into either the branch AB or the branch AC. Both branches lead to different critical strategic opportunities at point B or C. Through repetitive strategic decision points
very different historical paths are open to the development of one and the same primary initiative. The difficulty in managing such a process is that, at the time of the strategic decisions, the future opportunities that may result are not known, whilst the possible paths of further development are critically dependent on the option that is chosen. The historical path the company has taken can only be reconstructed ex post. Many examples of such developments can be seen if the history of different companies is analyzed. The history of Gist-brocades has been discussed earlier. It entered penicillin production at the end of World War II. Based on this it attempted an entry in the by then growing market for ethical pharmaceuticals when NGSF merged with Brocades in the late 1960s to form Gist-brocades. This entry did not last, as Gist-brocades sold its ethical pharmaceuticals business in the beginning of the nineties. The company became a producer of food ingredients, enzymes and active ingredients for penicillins and cephalosporins. Pfizer, nowadays a leading ethical pharmaceuticals company, made the same decision in that war: it entered the production of penicillin. This was based on fermentation know-how that emerged from its activities in the production of citric acid by fermentation. The company pioneered fermentative production of citric acid in 1919 to decrease its dependence on citric acid from citrus fruits. It spun off the production of citric acid and discontinued the production of penicillins towards the end of the 20th century, and it became one of the leading pharmaceutical houses, with a successful track record in the development of new ethical drugs ever since the 1960s. This example shows that companies with a very similar competence base in the past can show historical developments that result in very different companies with drastically different competences and information sets to date.
10.10. Aspects of Joint Ventures, Divestitures, Mergers and Acquisitions.
This section analyses some selected aspects of joint ventures, divestitures, mergers and acquisitions from the perspective of VTT. It is definitely outside the scope of this work to exhaustively treat these important subjects. We distinguish two limiting cases of different complexity. Firstly, we consider the cases where the entities involved in the moves described are separate units that can be detached from the mother unit. We assume that the whole genotype and the whole phenotype of the units can be transferred to the new entity. However, even in this straightforward case there can be a number of difficulties if the units involved in the deal were part of a larger conglomerate. Let us first consider the phenotype, i.e. the products, understood in the broad sense introduced in section 9.7. Even if we can completely separate the physical products, including the services, there may be aspects that get lost if we separate an entity from the mother company. This may be the mother company's overall image, such as the strength of its brands, its perceived financial power and its attractiveness as an employer. The latter part may be especially relevant with respect to tacit aspects, such as the knowledge and the image of the key personnel, which may be difficult to retain in the new entity. The same of course holds for the company's information set, the genotype; can it be effectively transferred to the new unit and will we be able to retain crucial players and their tacit knowledge? Such considerations apply to both the company that divests and the company that acquires. The situation becomes more complex if, either for the phenotype or the genotype or both, a complete separation and transfer cannot be achieved. To illustrate this we will revisit the example of a biological system.
In a biological system we can, at the level of the genome, identify individual pieces of information that code for a certain functionality at the protein level. However, the contribution of a gene to the overall competitiveness of the organism cannot be isolated from the interplay with the other genes, the other pieces of the information set.
Removing a gene, or sharing the gene with a potential competitor, may unexpectedly alter the overall competitive arena. This takes effect through the phenotype, where the interplay between all the products of the genome determines competitiveness. Again, aspects of company image or brand strength that remain with the mother company may have a decisive effect. This may also impact the prospects of retaining key personnel and their contribution to tacit aspects of the information set. It is important to determine which parts of the mother genotype and phenotype are part of the deal and, importantly, also the aspects that are not part of the deal. How is it assured that the mother company no longer uses the parts of the genotype that are transferred, outside the activities that remain with the mother, to directly or indirectly compete with the new entity in the business that is transferred? In case of a divestiture the delivering company has to decide what it divests and how this will impact the present and future competitiveness of the mother unit in its existing products and, importantly, its prospects for future strategic moves. Spinning off parts of the phenotype and the genotype may have an impact on the prospects of developing new products or new businesses. Which products are transferred? Are these transferred for all present and future markets or only for a limited number of markets? Who are the people that are transferred and what will be the impact on company image and tacit knowledge? Which competences are transferred, and are these needed for the continuation of the company's present business and future prospects? In fact analogous considerations apply to the acquiring company. Can the phenotype and the genotype be effectively separated from the delivering company? How do we avoid that the divesting company uses the divested products and competences to continue contesting the same market opportunity, now or in the near future?
How do we assure effective transfer of know-how, both in the tangible and the tacit perspective? Can we retain key personnel and assure their effective transfer? All the aspects mentioned above are non-trivial and need to be carefully considered, also in the drafting of the contracts covering the deal. The prospects of policing the deal also need to be considered at this stage. The whole process needs to be truly multidisciplinary and all the vital functions of the company need to be involved. It is far beyond a purely legal or financial exercise. Know-how covered by IP is perhaps the most tangible of the intellectual assets considered in deals of the nature discussed here. Still, issues like transferring patents to the new entity, or exclusive or non-exclusive licensing for only the part of the business transferred, are options that need to be carefully analyzed from the perspectives of both the delivering and the receiving entity. The analysis should involve both the present competitive situation and the prospects regarding foreseen future developments. Patent rights remain in place for 20 years, so this exercise may involve large uncertainties for both the receiving and the delivering party. In addition, the decision process may be under time pressure and still needs to be pursued carefully as well as diligently. As was already stressed, acquiring entities or merging with other companies only makes sense if the whole of the new entity is more than the sum of its parts; somehow new opportunities to couple to existing needs in the market, to needs not previously accessible or to needs new to the world must be spotted to realize the synergies that are necessary to complete a deal that is profitable for both the buyer and the seller. Again, asymmetries in information play an important role in this respect.
Acquiring activities is often necessary to complement the information set of the company in view of imminent or latent changes in the market in which the company is entrenched, or to allow the company to move to territory hitherto outside its scope. Organic entry into information sets already in existence is often too costly and risky.
10.11. Strategy development. In modern business practice strategy formulation and implementation is an accepted tool for business and corporate development. What is strategic planning all about? There exist many approaches to strategic analysis and strategic planning. Most companies follow a more or less formalized approach. We use a version of the procedure outlined in Douma and Schreuder (2008), to which the reader is referred for more detail. Fig. 10.15 shows the approach. The process starts with the formulation of goals. Are we happy with what and where the company is today, and where do we want to be in the future?
[Fig. 10.15 shows a cycle of steps: Formulate Goals, Scan Environment, Internal Scan, Strategic Choice, Implementation and Monitoring.]
Fig. 10.15. An approach to strategic planning. Subsequently, we start performing two types of analyses. Firstly, we develop a perspective on the environment of the company, including but not limited to the competitors. We analyze the external value chain (Section 10.6) and perform an analysis of the industry structure, for example using the Porter Cross (Section 10.5). Secondly, we scrutinize the internal value chain (Section 10.7). The objective is to build a model of the environment and to get a perspective on the firm's genotype, the information set of the company. We also analyze the competitiveness of the present phenotype, the product portfolio understood in the broad sense we discussed earlier (section 9.7). We develop a future perspective. Which of the free value forces we presently exploit to produce profit will grow, and which ones will decay or disappear? What new forces exist or can be created? Where will the competitors move in terms of genotype and phenotype? Large uncertainties remain even if we do the analysis most carefully. Our picture of the environment, and even of the company's own genotype and its possibilities for development, is characterized by irreducible uncertainty and hence a significant statistical entropy. Of course we use the best brains in the company and we develop a truly multidisciplinary perspective, but significant uncertainties remain. Still we have to agree on a model of reality and use it as a shared guideline for future development. From the model we derive what products need to be developed to optimally couple to the sources of free value in the future environment. From this we derive what needs to be done to our information set. In which direction does it need to be shaped and be allowed to evolve? What instruments do we have available in terms of organic development of the competence set, e.g. R&D, investments in marketing and new production facilities? Do we need cooperation, and which parts of the internal value chain need to be involved?
Do we need to acquire, do we need mergers? In most cases the
uncertainties are large and we end up with a list of strategic options, with different actions and, importantly, also different risk profiles, due to the uncertainties inherent in our picture of the complex reality. We cannot afford not to choose from the options identified, certainly if these result in very different actions to be taken with respect to the environment and the internal information set of the company. We need to make choices. In the absence of a choice we do not develop a systematic direction to our actions in the environment and internally. We will not have a consistent plan to direct the evolution of the environment and our company. Without clear selection, evolution will not proceed in the planned direction. No matter the uncertainties of the mental model, we need to set a clear direction to provide an arrow to internal and external evolution. Given the uncertainties in our mental model we need milestones to measure whether the internal and external evolution develop as we planned. No doubt there will be deviations inducing changes in the detailed implementation. However, we need to be careful not to deviate too much and too easily from the chosen direction. Here too we encounter the copying fidelity threshold; if we change direction too quickly and too frequently, evolution will not take place in any direction and we will most certainly not end up where we want to be. Also we need, for basically the same reason, to be careful not to overestimate the changes we can organically realize in the company's genotype, its information set. Drastic additions or changes will often require cooperation, merger or acquisition of existing information sets in other entities.
10.12. Economic cycles, fluctuations and the dynamics of systems. In chapter 8, section 8.3, we discussed the stability characteristics of steady states in the strictly linear and in the non-linear non-equilibrium ranges. Steady states can be classified as intrinsically stable or unstable based on an analysis of the excess entropy or statistical entropy production on perturbation of the steady state by a fluctuation. The dynamics of recovery from a fluctuation that disturbs a stable steady state can be of varying complexity. This also holds for the transient involved in the evolution to a new steady state once a steady state has become unstable and the system progresses to one of the new stable steady states. To study these phenomena in somewhat more detail we have to turn to the theories of linear and non-linear system dynamics, or the linear and non-linear approaches to dynamic control theory. A word of caution seems appropriate. We discussed the theories of thermodynamics with respect to the evolution of systems in the linear and non-linear regions of thermodynamics. The adjectives linear and non-linear in the theories of system dynamics refer to a different kind of linearity than in thermodynamics. In thermodynamics linearity refers to the relation between flows and forces. In the theories of system dynamics linearity refers to the nature of the differential equations that describe the time evolution of a system: for linear system dynamics those differential equations are linear. In the following, in discussing elements of systems theory, we will adopt linearity with respect to the differential equations that describe the system. The theory of linear system dynamics (Hespanha (2009)) is, not unexpectedly, the most straightforward and best developed one. In linear systems the dynamics of transients turn out to follow a summation of exponential terms.
A possible course of evolution for a simple system in which a fluctuation perturbs a stable steady state is straightforward exponential decay, as depicted in fig. 10.16. The perturbation ΔY smoothly and exponentially decays to zero and the steady state is reestablished.
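This smooth recovery can be sketched in a few lines of code. The sketch below is purely illustrative; the eigenvalue λ = -0.5 and the size of the perturbation are arbitrary choices, not values from the text:

```python
import math

def perturbation(t, dy0=1.0, lam=-0.5):
    """Closed-form decay of a perturbation around a stable steady state:
    dY(t) = dY0 * exp(lam * t); lam < 0 guarantees a return to the state."""
    return dy0 * math.exp(lam * t)

# The perturbation shrinks monotonically toward zero, as in fig. 10.16.
values = [perturbation(t) for t in range(0, 12, 2)]
assert all(a > b > 0 for a, b in zip(values, values[1:]))
assert perturbation(30) < 1e-6  # effectively back at the steady state
```

A less negative λ would give the same qualitative picture, only with a slower return to the steady state.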
Fig. 10.16. Dynamics of evolution on perturbation of a simple stable steady state (ΔY versus time). Even for relatively simple systems reality is not always as straightforward as the smooth evolution shown in Fig. 10.16.
Fig. 10.17. Oscillations in the decay of a fluctuation (ΔY versus time). As an example the evolution may follow the time course depicted in Fig. 10.17.
In this case the fluctuation also decays, but instead of a smooth decay the system oscillates with a degree of cyclicality and the size of the fluctuations gradually decreases with time. We clearly see a cyclical behavior of the fluctuations in the variable under consideration. In fact this simple system exhibits some of the features of stock market indices, which also show fluctuations, albeit far less regular ones. We briefly return to the general case. For a linear dynamical transient the solution for the decay of a fluctuation follows a summation of exponential terms; the general form is given by:

ΔY(t) = Σi Ci exp(λi t)    (10.3)
Where the Ci are constants, t is time and the λi are the so-called eigenvalues of the dynamical system. These eigenvalues characterize the rate of decay of fluctuations and the way in which the steady state recovers after a fluctuation. It turns out that for a steady state of a linear dynamical system to be stable and to return to the steady state smoothly, without fluctuations around the time evolution of the recovery, the eigenvalues must be real numbers; they must not contain an imaginary part. In addition these real numbers all have to be negative, otherwise the steady state is unstable in the first place. If the eigenvalues do contain an imaginary part, and this is generally the case, certainly as systems become complex, the type of oscillations depicted in Fig. 10.17 develops. These fluctuations show a fixed periodicity, although, again for complex systems, the response consists of a superposition of periodic signals with, in principle, different periodicities. Still, we can exactly predict the timing of the cycles; the periods are constant as time evolves. Do these systems tell us something about the subject matter of this book? To analyze this we briefly turn to the concepts of business cycles and economic cycles as these feature in economic theory. The concept of economic cycles probably first appeared in the history of economic thinking in 1860; the idea is attributed to the French economist Juglar. His economic cycles had a period of 7-11 years; he did, however, not claim rigid regularity (Lee (1955)). Later on Schumpeter and others proposed business cycles or economic cycles of varying periodicity. Some of the cycles that have been proposed are:
- The Kitchin inventory cycle with a period of 3-5 years.
- "The" business cycle of Juglar, concerning investments in fixed assets, with, as said, a 7-11 year periodicity.
- The Kuznets infrastructure investment cycle of 15-20 years.
- The long technological Kondratiev wave of 45-60 years.
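The role of the eigenvalues in eq. (10.3) can be illustrated numerically. The sketch below is hypothetical: the two 2x2 coefficient matrices are invented solely to produce one purely real and one complex-conjugate case, using the standard characteristic-polynomial formula for a two-variable linear system:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of the system matrix [[a, b], [c, d]], obtained from the
    characteristic polynomial lam^2 - (a + d)*lam + (a*d - b*c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Real, negative eigenvalues: stable state, smooth decay as in fig. 10.16.
l1, l2 = eigenvalues_2x2(-1.0, 0.0, 0.0, -2.0)
assert l1.imag == 0 and l1.real < 0 and l2.real < 0

# Complex eigenvalues with negative real part: damped oscillations as in
# fig. 10.17; the imaginary part fixes the period of the cycles.
l1, l2 = eigenvalues_2x2(-0.2, 1.0, -1.0, -0.2)
assert l1.imag != 0 and l1.real < 0 and l2.real < 0
```

A positive real part in either case would instead mark the steady state as unstable.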
Present day economic theory does not adhere to the belief that there exist cycles of more or less rigid periodicity. There are, however, economic fluctuations resulting from imbalances in the supply and demand of products and services. These have been assumed to stem from endogenous causes, processes within the economic system, or exogenous causes from outside the economy. The presently dominant Keynesian approach takes the endogenous perspective (Markwell (2006)), although alternatives, such as Real Business Cycle theory (Plosser (1989)), exist. The proponents of the Keynesian approach assume failure of markets to clear smoothly due to imbalances in supply and demand, with the causes lying alternatively at the supply or at the demand side. Let us get back to the approach to linear dynamic systems introduced at the beginning of this section. There periodicity arises naturally if the eigenvalues resulting from a linear model
of the system have an imaginary part. Does this provide a valid vehicle to model economic cycles? If so, we would recover strict predictability, as these models predict a number of strictly periodic waves superimposed on the general direction in which the system evolves. This conflicts with the present view of economic theoreticians that there are no cycles but rather unpredictable fluctuations. Furthermore, we get into conflict with the VTT approach developed in this work. In our model the dynamic behavior of systems is unpredictable, as the future is not contained in the present and the past. This holds for the evolution of the system once a steady state has become unstable. It also holds for the way in which an intrinsically stable steady state recovers from a fluctuation. What we can predict is that the system will return to the steady state if that state is stable. The exact way in which it returns to the steady state is unpredictable and may occur in a very complex dynamical way. What is wrong with the well behaved linear systems we introduced earlier in this section? Firstly, we have to realize that the socioeconomic system is far from linear; it is definitely non-linear in the system dynamics sense, i.e. the differential equations governing the evolution of the system in time are non-linear. Could linear models still be representative for some features of the behavior of the system in the neighborhood of its steady state, an approach commonly adopted as a first approximation in the analysis of non-linear systems? Here we have to turn to another problem inherent in the mainstream approaches to linear and non-linear system dynamics. In system dynamics we assume that the future evolution follows if the initial conditions and the equations describing the evolution of the state variables in time, the state equations, are given. Here the real problem surfaces.
We stressed that we can only have a macroscopic description of the system. Hence only average values of the state variables are known. The real values of the state variables constantly fluctuate around the averages defined by a macroscopic description. One can argue that this is irrelevant, as the fluctuations are very unlikely to be large if the system is complex and can exist in a large number of microstates. This proves beside the point from the science of complexity perspective, where the so-called butterfly effect reigns. The butterfly effect (e.g. Gleick (1988)) refers to the fact that very small effects, e.g. minute changes in the initial conditions describing the system, can, even in the short run, have large effects on the condition in which the system will find itself later on. The flapping of the wings of a butterfly in the US may cause a tornado to hit China after an unpredictable period of time. This comes back to the very essence of the uncertainty of the macroscopic description, resulting in unpredictability of the evolution of non-linear complex systems. In fact the problem hits us already at early stages of complexity: even for three bodies moving under Newton's gravitational laws the future evolution cannot, in a longer term perspective, be deterministically specified. What can we say about economic fluctuations based on VTT? The first thing we can say is that these are bound to happen and that they certainly will not have a rigorously specified periodicity. In addition we can say that there are many endogenous mechanisms that will cause the economy to fluctuate. We do not have to assume a relation with (divine?) exogenous intervention, although some exogenous effects will have an impact (we can think of the celestial body presumed to have wiped out the dinosaurs; the prospects of global warming, on the other hand, are endogenous). Also at the level of causes we can say something.
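The butterfly effect invoked here can be demonstrated with the logistic map, a standard textbook example of deterministic chaos; the map and the parameter r = 4 are a stock illustration, not part of the author's argument:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1 - x), which is chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # a 'butterfly'-sized initial difference

# After one step the trajectories are still practically identical ...
assert abs(a[1] - b[1]) < 1e-5
# ... yet within a few dozen steps they diverge to macroscopic distance.
assert max(abs(x - y) for x, y in zip(a, b)) > 0.5
```

The same deterministic rule applied to two macroscopically indistinguishable initial conditions thus yields completely different trajectories, which is exactly why a macroscopic description cannot predict the detailed evolution.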
In our evolutionary approach supply and demand are part of a cycle driving the evolution of the system, and the distinction between cause and effect is meaningless. What we can speculate is that the simple-minded linear systems approach may be relevant for the analysis of short-term effects, such as the short-term fluctuations in the stock market.
Economies, markets and industries: Value Transaction Theory
10.13. Conclusion and prospects. This book proposes and develops an evolutionary approach to the analysis of economies, industries and markets. Its explanatory power in a qualitative sense is, at least in the opinion of the present author, beyond question. The basic idea is not new; evolutionary approaches to economic systems have been proposed ever since the pioneering work of Nelson and Winter (1982). In this work the foundations of such an approach were investigated and traced back to the nature of the macroscopic description of physical systems as applied in thermodynamics. Macroscopic approaches take into account that reality is far too complex to be grasped in all its microscopic detail. It is far too complex for a detailed description, as we cannot avail of sufficient information to describe the richness of phenomena that appear in reality. Furthermore, obtaining full information at the microscopic level comes at a prohibitive cost. The uncertainty contained in the macroscopic description comes at a cost: the statistical entropy contained in the macroscopic description does not allow harvesting of the full true value that is present in the environment. Only the free value, the true value corrected for statistical entropy and the associated value of information, can be obtained. This is the well-known distinction between internal energy and free energy as it features in thermodynamics. In thermodynamics the evolution of complex structures, so-called dissipative structures, has been reconciled with thermodynamic basics, such as the first and second laws of thermodynamics and the concept of temperature, over the last fifty-odd years. This development has been summarized in this work. In the author's opinion the application of the basic approach behind non-linear non-equilibrium thermodynamics to phenomena in socioeconomic systems goes far beyond a mere analogy; it is deeply rooted in the macroscopic systems approach to understanding reality.
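The correction of true value for statistical entropy can be sketched numerically. The figures below are invented purely for illustration: an assumed three-microstate distribution, a hypothetical "true value" and a hypothetical cost of information play the roles of internal energy U, temperature T and entropy S in the familiar F = U - T*S bookkeeping.

```python
import math

# Sketch of the book's free-value bookkeeping under a macroscopic
# description.  All numbers are invented for illustration.
p = [0.5, 0.25, 0.25]                   # assumed microstate probabilities
S = -sum(q * math.log(q) for q in p)    # statistical entropy (in nats)

true_value = 100.0          # hypothetical "true value" in the environment
cost_of_information = 10.0  # hypothetical analogue of temperature

# Free value: the part of the true value that can actually be harvested,
# analogous to F = U - T*S in thermodynamics.
free_value = true_value - cost_of_information * S
print(round(S, 4), round(free_value, 2))
```

The sharper the macroscopic description (the more peaked the distribution over microstates), the smaller S and the closer the harvestable free value comes to the true value.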
Socioeconomic phenomena are a logical stage in the biological evolution that is fuelled by the non-equilibrium system on earth, where solar radiation forms an important source of true value that is, albeit still to a limited extent, tapped with increasing sophistication. Certainly from the perspective of qualitatively explaining basic features of our economy such an approach is both necessary and fruitful. As far as quantitative understanding is concerned, problems remain. The macroscopic description comes at a cost; the predictive power of the approach is limited due to the bounded rationality that cannot be avoided in the macroscopic approach. Its basic roots are easily understood. As an example we can go back several million years, when the first ancestors of mankind appeared as a product of the biological evolution on earth. Could we have predicted the development of our industries as we see them today when these first ancestors appeared? The answer is that even the most sophisticated modeling approach would not have allowed that. Today's understanding of evolution is that it is in itself unavoidable; it will result under the conditions we discussed in this book and it is a feature of the environment on earth. The exact path that evolution will take is, however, intrinsically unknown and unpredictable. This is the penalty resulting from our limited information. Thus evolution is, as it has been phrased by Jacques Monod, subject to "Chance and Necessity". Does this leave us completely powerless? Maybe not; it can be expected that basic features of biological evolution can be extended to the sphere of our socioeconomic system, as we showed in the last chapter of this work. However, a lot of work still has to be done before quantitative understanding comes closer, and the ideal of full understanding can never be reached, because of the nature of the theories that are the subject of this book. What can we do, based on the theories developed in this work?
To illustrate a possibility, the discussion will be closed with an example from industry: we will return to the process of fermentative production of a basic raw material for semi-synthetic penicillins, the production
of Penicillin G. As discussed earlier, the production of so-called β-lactam antibiotics traces back to the discovery of Fleming in 1928 (Fleming (1929)). The Second World War induced its industrial production in the early 1940s. The varieties of the production organism used in the early days produced only minute quantities of the antibiotic, and the first quantities came at a unit cost prohibitive to its widespread application. Over the years the production cost of Penicillin G has decreased by a factor of at least 10,000. In the last decade the market price for a kilo of crude penicillin fluctuated around or below 20-30 $/kg. This was due to a learning-by-doing approach to the development of the competence involved in the production of penicillin. As has been documented in the literature (Van der Beek and Roels (1984)), the efficiency of the penicillin production process has improved dramatically over the years. Penicillin is produced in a so-called fed-batch fermentation process in large stirred vessels, called fermenters, in a submerged fermentation of the mould Penicillium chrysogenum. Over the years 1962-1982 the productivity, expressed in units per unit fermenter volume and unit time, increased almost fivefold. This was, among other things, achieved by developing superior varieties of the production organism, varieties devoting more of their metabolic energy to the production of the desired product than the original production organisms at the end of the Second World War. This was based on a kind of evolution, in which random changes were introduced into the genetic information of the production organism, followed by selection of better producing mutants. This artificial evolution, based on very little detailed knowledge of the changes that were introduced into the genetic information, proved to be a very effective strategy. Later on, i.e.
following the market introduction of penicillin as an antibiotic, the scientific understanding of the biosynthesis increased, and this resulted in new possibilities for optimization of the biosynthesis. However, it became clear that many of the enzymes in the biosynthetic route towards penicillin had already been optimized in the process of random selection of superior strains before the details of the biosynthetic pathway were known. This shows the power of random mutation combined with directed selection. In recent years the genome, i.e. the complete information set of the organism at the genetic level, has been elucidated by DSM, and a more directed approach to the optimization of the organism is in principle possible (Van den Berg et al. (2008)). This is, however, far from a straightforward task, because even if the total genome is known, prediction of the full interplay of the enzymatic machinery is beyond the power of the mathematical methods used in predicting the production characteristics of genetically modified strains. The example of the development of the penicillin production process discussed above illustrates an important point. Certainly, the metabolism of the mould responsible for the industrial production of penicillin is far too complex to undertake a directed manipulation of its genome and to predict the impact of this on the production of penicillin in large-scale fermenters. The advent of genomics and genome sequencing does allow the identification of all the genes in the organism and in principle allows us, although this is already trickier, to identify the function of these genes. Prediction of the structure of the cell and its metabolism based on the interplay of the identified gene activities is, however, still far beyond the reach of the present state of the art.
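The "random mutation plus directed selection" strategy described above can be caricatured in a few lines of code. The fitness function below is a deliberate stand-in: the whole point of the black-box approach is that the real genotype-to-productivity map is unknown, so a toy objective is all we can honestly write down.

```python
import random

# Toy mutation-plus-selection loop in the spirit of classical strain
# improvement: mutate a bit string at random and keep the variant only
# if it scores at least as well.  The fitness function is invented.
random.seed(42)

def fitness(genome):
    return sum(genome)       # stand-in for "units of product per litre"

genome = [0] * 64            # the initial, poorly producing "strain"
for _ in range(2000):        # rounds of mutation and selection
    mutant = genome[:]
    i = random.randrange(len(mutant))
    mutant[i] ^= 1           # one random mutation
    if fitness(mutant) >= fitness(genome):
        genome = mutant      # selection: keep the better producer
print(fitness(genome))       # close to the optimum of 64
```

Note that the loop never inspects the genome's structure; it only compares outcomes, which is exactly how strains were improved for decades before the biosynthetic pathway was known.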
The author is also of the opinion that this will still involve a lengthy and costly research agenda and doubts whether it will ever be realistically possible. Still, we have been able to improve the productivity of the production organism and decrease the cost of producing penicillin by orders of magnitude. This progress was, as stated above, certainly initially, based on wholly random mutation of the genome and clever selection methods, resulting in an artificial evolution of the strains in the desired direction of more economic production of penicillin. Apparently researchers were able to arrange this "survival of the fittest" game in such a way that the course of evolution proceeded in a direction that was desirable in the eyes of the industrial players. The researchers successfully created an environment in which the evolution took a
desired course. In this process, as in evolution on earth, most of the mutant strains did not survive the harsh rules of the mutation and selection process. In more recent years modern know-how in biochemistry and genomics has made, as already indicated, a more rational and directed approach possible, at least in principle, and some successes have been obtained in industrial research practice. However, many of the rational design approaches lead to improvements already realized in the decades of random mutation and directed selection. In modern approaches to the development of industrial strains, clever combinations of rational and random approaches are still the most preferred and productive approach, and this will not change in the near or even the remote future. The organism, in the past a true black box with no means to look inside, has perhaps changed into a grayish box, but the phase of complete transparency and rational manipulation is still far away. In the design of industrial strains we still find ourselves, as is the case in the management of a firm, in a stage where bounded rationality prevails. A significant statistical entropy is still present. The development highlighted above shows that, in the absence of detailed knowledge of the information set, and this is of course always the case in socioeconomic systems, where the analogue of genetic sequencing does not exist, useful strategies can be devised to introduce desired characteristics. This could be a direction in which an evolutionary strategy regarding the socioeconomic system could be useful, although the possibilities for experiment are much more limited than in microbiology, where the introduction of variation only occasionally leads to improved strains. This is hardly an option for the management of a firm, as we are not likely to be in a position to repeat an experiment when it is unsuccessful.
The question is whether we can devise methods of control and intervention, both from the company and from the macroeconomic perspective, that result in an evolution that is considered beneficial. This involves an incomplete and challenging research agenda that is beyond the scope of this work. The only thing we have achieved in this book is to develop the mathematical framework to describe systems in cases where our model of reality involves a large statistical entropy.
Fig. 10.18. The evolution of the S&P 500 over the last 5 years

The limitation of our picture of the world's economic reality and the statistical entropy of that picture is well reflected by recent experience. We have witnessed, or maybe we are still witnessing, an economic crisis that seems to have been predicted by some, but certainly not by the majority of the mainstream experts. Certainly, many players in the stock market did not foresee this development, as is clear from the development of the S&P 500 index as
depicted in Fig. 10.18. Given the collapse of the index, the economic developments were a surprise to many investors. Of course, we know that governments and regulators reacted to this crisis by taking measures geared to restoring our economy. In the jargon of Darwinian evolution, they attempted measures to direct the evolution in a more desired direction. At the time of the writing of this book this seems to have been successful (at least it has to date not been clearly detrimental). It is also instructive to note how the experts differed, and continue to differ, in opinion about the measures that are most likely to be needed. Of course, the problem is not that models are not used to try to get a grip on the economy. Models are widely used but are apparently subject to significant statistical entropy.
10.14. Literature cited.
Aghion, P., P. Howitt (1998), Endogenous Growth Theory, MIT Press, Cambridge (MA)
Akerlof, G.A. (1970), The Market for "Lemons": Quality Uncertainty and the Market Mechanism, Quarterly Journal of Economics, 84 (3), 488-500
Alchian, A.A. (1950), Uncertainty, Evolution, and Economic Theory, Journal of Political Economy, 58, 211-221
Alchian, A.A., H. Demsetz (1972), Production, Information Costs, and Economic Organization, American Economic Review, 62, 777-795
Demsetz, H. (1973), Industry Structure, Market Rivalry, and Public Policy, Journal of Law and Economics, 16, 1-9
Fama, E.F. (1976), Foundations of Finance, Basic Books, New York
Fleming, A. (1929), On the antibacterial action of cultures of a penicillium, with special reference to their use in the isolation of B. influenzæ, Br. J. Exp. Pathol., 10 (31), 226-236
Gleick, J. (1988), Chaos: Making a New Science, Penguin, New York
Hespanha, J.P. (2009), Linear Systems Theory, Princeton University Press, Princeton (NJ)
Hirshleifer, J. (1977), Economics from a biological viewpoint, Journal of Law and Economics, 20 (1), 1-52
Jensen, M.C. (1972), Capital Markets: Theory and Evidence, Bell Journal of Economics and Management Science, 3 (2), 357-398
Jensen, M.C., W.H. Meckling (1976), Theory of the firm: Managerial behaviour, agency costs and ownership structure, Journal of Financial Economics, 3 (4), 305-360
Klein, B., et al. (1978), Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, Journal of Law and Economics, 21, 297-326
Knight, F.H. (1921), Risk, Uncertainty, and Profit, Hart, Schaffner and Marx; Houghton Mifflin Co, Boston (MA)
Kuhn, T.S. (1962), The Structure of Scientific Revolutions, University of Chicago Press, Chicago
Lee, M.W. (1955), Economic Fluctuations, Richard D. Irwin, Homewood (IL)
Lewin, R., and R.A. Foley (2004), Principles of Human Evolution (2nd Edition), Blackwell Publishing, Oxford
Markwell, D. (2006), John Maynard Keynes and International Relations: Economic Paths to War and Peace, Oxford University Press, New York
Monod, J. (1971), Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology, Alfred A. Knopf, New York
Nelson, R.R., S.G. Winter (1982), An Evolutionary Theory of Economic Change, Belknap Press of Harvard University Press, Cambridge (MA)
Nicolis, G., and I. Prigogine (1977), Self-Organization in Nonequilibrium Systems, Wiley & Sons, New York, p. 459-461
Plosser, C.I. (1989), Understanding Real Business Cycles, Journal of Economic Perspectives, 3, 51-77
Porter, M.E. (1980), Competitive Strategy: Techniques for Analyzing Industries and Competitors, Free Press, New York
Porter, M.E. (1981), The Contribution of Industrial Organizations to Strategic Management, Academy of Management Review, 6, 609-620
Porter, M.E. (1985), The Competitive Advantage, Free Press, New York
Prahalad, C.K., and G. Hamel (1990), The Core Competences of the Corporation, Harvard Business Review, 68 (3), 79-93
Prigogine, I. (1980), From Being to Becoming, W.H. Freeman and Co, San Francisco
Roels, J.A. (1983), Energetics and Kinetics in Biotechnology, Elsevier Biomedical Press, Amsterdam
Romer, P. (1990), Endogenous Technological Change, Journal of Political Economy, 98, S71-S102
Roussel, P.A., et al. (1991), Third Generation R&D: Managing the Link to Corporate Strategy, Harvard Business School Press, Boston (MA)
Simon, H.A. (2005), in K. Dopfer (Ed.), The Evolutionary Foundation of Economics, Cambridge University Press, Cambridge (UK), Chapter 4
Solow, R.M. (1956), A Contribution to the Theory of Economic Growth, Quarterly Journal of Economics, 70 (1), 65-94
Tintner, G. (1941a), The Theory of Choice under Subjective Risk and Uncertainty, Econometrica, 9, 298-304
Tintner, G. (1941b), The Pure Theory of Production under Technological Risk and Uncertainty, Econometrica, 9, 305-322
Van den Berg, M.A., et al. (2008), Genome sequencing and analysis of the filamentous fungus Penicillium chrysogenum, Nature Biotechnology, 26 (10), 1161-1168
Van der Beek, C.P., and J.A. Roels (1984), Penicillin production: biotechnology at its best, Antonie van Leeuwenhoek, 50, 625-639
Williamson, O.E. (1975), Markets and Hierarchies: Analysis and Antitrust Implications, Free Press, New York
Glossary of salient terms

arrow of time: the direction of time to the future, i.e. a defined direction; it is the direction in which entropy production is positive
asymmetries in information: differences in information between the actors in the system; these asymmetries drive transactions and are the driving force behind profitable transactions
autocatalytic, autocatalysis: processes involving entities that enhance their own production
balance equation: equation for the rate of change of the amounts of extensive quantities
bifurcation: point at which states become unstable and various options for the future development of a system become possible
Boltzmann constant: constant relating the absolute temperature to the cost of information about the microstate of the system
bounded rationality: uncertainty in decision making in complex systems as a reflection of the statistical entropy of the information about the system's state
Capital Asset Pricing Model: financial model for the analysis of investments subject to uncertainty about their future return
capital assets: financial assets such as stocks and bonds and liquid capital
capital market line: relation between expected return and investment risk according to the Capital Asset Pricing Model
Carnot cycle: cyclical process for the transformation of heat into work
Chance and Necessity: statement of Jacques Monod reflecting that under the right conditions evolution will take place but we remain uncertain about its exact course due to the random nature of the mutations that drive evolution
competences: complex integration of technologies and other routines that provide a durable competitive advantage, generally difficult to copy
competition: struggle for sources of free energy or free value resulting in selection of the most successful actors; a prerequisite for evolution
complex, complexity: systems that require a large amount of information to specify their exact microstate
conserved quantity: state variable not subject to change in transformation processes or transactions
copying fidelity: reliability of the copying of information; must be above a well defined limit, depending on the amount of information, to prevent the message from disintegrating in copying
cost of information: energy or value costs of obtaining information about the system's microstate
coupling: mechanism by which an entity extracts free energy or free value from a force in the environment
coupling, degree of: extent to which an entity couples to a source of free energy or free value, constrained between 0 and 1
deoxyribonucleic acid, DNA: chemical entity containing the genetic code in biological systems
dissipation: degradation of the quality of sources of energy and value in transformations and transactions
dissipative structures: ordered structures that maintain their integrity by feeding on sources of free energy or free value
distribution function: function describing the dependence of the probability of a microstate on the value of a variable
downhill reaction: reaction proceeding in the natural direction of decreasing free energy or free value
efficient market: market in which all actors have the same information and all use it in the most effective way
energy: the potential capacity of a physical system to do work; some of the potential cannot be made free due to the statistical entropy of the macroscopic picture of reality
enthalpy: alternative formulation of energy, used if changes in the system's pressure and volume are taken into account
entropy: thermodynamic state variable reflecting the statistical entropy of the macroscopic description; the second law of thermodynamics implies that for an isolated system it can only increase
entropy production: production of entropy in the processes that take place in a system; the second law states that it is positive or zero
entropy production, minimum: principle stating that in a steady state in the strictly linear region beyond thermodynamic equilibrium the entropy production reaches the minimum possible in view of the constraints that are applied
equilibrium: refers to thermodynamic equilibrium or equilibrium in an economy
error threshold: copying fidelity limit for an information code; if the error rate of copying surpasses this threshold the code will disintegrate
evolution, biological: the time sequence of the development of biological systems under the forces of mutation and selection, also termed Darwinian evolution
evolution, economic: the time sequence of the development of an economy under the pressure of learning by doing and selection
evolution, exogenic: the development of biological or socioeconomic systems based on the development and processing of information beyond the genetic code in RNA or DNA
evolution, prebiotic: the phase of the evolution on earth before the appearance of the first DNA- and RNA-based organisms
evolution, sustained: the phenomenon that in systems beyond a sufficient level of complexity evolution will continue for an indefinite period of time, resulting in increasingly sophisticated systems
evolutionary economics: a branch of economic theory applying the evolution metaphor to the socioeconomic system
evolutionary feedback: an increase of the force driving evolution by the organisms and organizations that develop in the process of evolution
evolutionary systems theory: a body of theory on the evolution of all systems beyond a certain limit of complexity
extensive quantities: quantities that can be added up with respect to parts of the system
extremum: maximum or minimum of a variable
first law of thermodynamics: thermodynamic law stating that energy is a conserved quantity, i.e. it is not produced or degraded in transformation processes
first law of VTT: value transaction theory law stating that (true) value cannot be produced or destroyed in transactions in a system; analogue of the first law of thermodynamics
fluctuation: random change in a variable characterizing a system
force: gradient in the ratio of free energy to temperature (thermodynamics) or free value to cost of information (VTT); it drives transformations and transactions
force ratio: normalized ratio of the force at the output side to the force at the input side of the linear free value transducer
force, conjugate: force driving its own transformation or transaction
force, non-conjugate: force driving a transformation or transaction through a coupling process
free energy: energy corrected for the product of the cost of information and the statistical entropy
free value: (true) value corrected for the product of the cost of information and the statistical entropy
free value transducer: device producing free value by coupling to a force in the environment
free value transducer, linear: free value transducer operated in the linear region
free value transducer, non-linear: free value transducer operated beyond the linear region
free value, gradients in: force driving economic transactions
gene: stretch of DNA coding for one defined protein
general evolution criterion: restriction defining the direction of evolution in a system
genetic code: the information comprised in the DNA of an organism
genome: the DNA code of an organism; metaphorically, the information set underlying an organization
genotype: the information set defining an organism or an organization
Gibbs free energy: free energy
heat: form of energy reflecting the random thermal movement of molecules, also a reflection of the statistical entropy of the macroscopic description
hypercycle: closed cyclical arrangement of information codes and corresponding functional entities
information: a message received and understood that reduces the recipient's uncertainty about the microstate of a macroscopic system
information, cost of, value of: cost in energy or value units of information about the system's microstate; the cost is at best equal to its value
information asymmetries: differences in information; these are the driver of economic transactions
information set: the collective information for an organism or an organization
information theory: a body of theory quantifying information and describing its communication
information work: energy or value expenditure to obtain information
information, bits: unit to quantify information; 1 bit is the information needed to discriminate two equally likely events
information, captive: information uniquely owned by an actor
isolated system: system not exchanging matter or energy with its environment
kinetics, chemical: body of theory describing the rate of chemical transformations in terms of the concentrations of reactants and products
kinetics, mass action law: kinetics scheme in which rates are expressed in terms of concentrations to the power of the number of moles of reactants and products participating in the reaction
life cycle, industry life cycle: the development of a subject in time
linear phenomenological equations: linear equations for the description of the rate of transactions or processes in terms of the forces
linear region: region beyond equilibrium where relations between flows and forces are linear; also strictly linear region if the reciprocity relations hold
macroscopic: a modeling approach in which the microscopic complexity of real systems is not taken into account
macroscopic branch: evolution beyond equilibrium where macroscopic theory is adequate to describe the evolution of a system
maintenance dissipation: dissipation of energy or value to sustain a dissipative structure
maintenance energy: alternative expression for maintenance dissipation
Maxwell demon: genie able to avoid paying the cost of information
microeconomic theory: classical equilibrium economic theory
microscopic: modeling approach taking into account details of the complexity of real systems
microstate, microscopic state: one defined state consistent with the initial information about the system
mutability: the property that the information on which a functional structure is based changes, randomly or in a directed fashion, in the process of copying
mutant: information set or functional structure resulting from changes in the copying process
mutation: change in the information set
niche: one defined opportunity to couple to a source of free energy or free value
non-equilibrium: state beyond thermodynamic or economic equilibrium
non-linear: absence of linearity of the relation between flows and forces
organizational economics: body of theory describing the relation between organizations and economic theory
organizational inertia: the phenomenon that organizations resist change
oxidative phosphorylation: the coupling of the oxidation of energy sources to ATP formation in biological systems
P/O ratio: stoichiometry of the coupling of oxidation to ATP formation in biological systems
perfect competition: economic transactions in which actors avail of the same information and all use it optimally
phenomenological coefficients: coefficients of coupling of flows to forces
phenomenological equations: equations describing the relation between flows and forces
phenotype: the functional structure derived by translation of the genotype
photosynthesis: the process by which energy sources are produced in green plants
power output: production of free value or free energy per unit time
probability: the likelihood of a variable having a given value
probability density function: the relation between the probability and the value of the variable; also probability distribution function
quasi-species: a linear combination of the amounts of similar information sets; it multiplies at a time independent rate
rate equations: equations describing the relation between rates and forces
rates: change of a variable per unit time
reciprocity relations: equality of the phenomenological coefficient coupling force i to flow j to that coupling force j to flow i
replication: multiplication of an information set or an organism
resources: sources of energy, value or matter that can be made available from the environment
reversible process: a generally slow change in the state of a system in which no statistical entropy production takes place
risk: the product of uncertainty and exposure
risk free value: a source of free value with zero statistical entropy
scarcity, scarce resources: limitations to the availability of sources of free value resulting in competition between actors for those resources
second law of thermodynamics: thermodynamic law stating that processes produce entropy
second law of VTT: VTT law stating that statistical entropy is produced in transactions
selection: the process resulting in the selection of the superior information sets in evolution
selection value: the competitiveness of a mutant information set
self organization: the mechanism by which structures spontaneously develop in non-equilibrium systems
socially optimal: an economic infrastructure that is macroeconomically optimal in terms of resource allocation
specialization, division of labor: a process of specialization of actors and organizations on a part of the process of producing economic goods
stability: a state that does not show macroscopic change due to fluctuations
stability of steady states: states that no longer change in time from a macroscopic perspective and resist change due to fluctuations
stability threshold: threshold on the magnitude of a force beyond which states are no longer intrinsically stable; also critical point
state function: macroscopic variable that only depends on the macroscopic state of the system and does not depend on the way in which that state was reached
state variables: variables defining a macroscopic state
statistical entropy: information that is lacking to pinpoint the microstate of the system in view of the available macroscopic information
statistical entropy production: production of statistical entropy in transformations and transactions
steady state: state in which macroscopic state variables have become time independent
tacit knowledge: information stored in an intangible way
team production: production of economic goods by cooperation between actors
temperature: thermodynamic state variable related to the energy cost of information regarding the system's microstate
thermodynamic equilibrium: a system state in which no processes of a macroscopic nature occur
thermodynamics, irreversible: thermodynamics of systems in which macroscopic processes take place
thermodynamics, macroscopic: thermodynamic approach ignoring the microscopic complexity of a system
thermodynamics, statistical: thermodynamic theory based on the statistics of the system's microstates
transformations: all processes taking place in a system
translation: deriving the phenotype from the genotype
transport: change of the amount of value, energy or material by exchange with the environment
uncertainty: inability to predict future developments due to information limitations
uncertainty, irreducible: uncertainty that cannot be removed based on the information of an actor
uncertainty, reducible: uncertainty with which the information of an actor can cope
uphill process: process against the direction prescribed by the second law
value free risk: generalized heat
value microstate: one identified microscopic state defining the value of the system; such a state has zero statistical entropy
value transducer: system that couples to a force to generate free value
work: source of energy or value with a statistical entropy of zero