This edition published in the Taylor & Francis e-Library, 2005.

"To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk."

Published by: The Watt Committee on Energy Ltd, 75 Knightsbridge, London SW1X 7RB
Telephone: 01-245 9238

© 1981 The Watt Committee on Energy Ltd
Dajon Graphics Ltd, Hatch End, Pinner, Middx. 9-81

ISBN 0-203-21025-5 (Master e-book)
ISBN 0-203-26810-5 (Adobe eReader format)
REPORT NUMBER 10
FACTORS DETERMINING ENERGY COSTS AND AN INTRODUCTION TO THE INFLUENCE OF ELECTRONICS

This is an omnibus report. The first part covers "Influences Upon Basic Costs and Prices of Primary Energy". It presents the assessment of a Watt Committee working group and is largely in qualitative terms. The work was initiated to give ETSU another view as to future trends in the costs of primary energy production. The second part is complementary, since electronics increasingly impacts on both the methods of procurement of energy and its effective utilisation. Otherwise, the parts are wholly separate, with the second being introductory.

The authors of the papers now presented in Part 2 include 5 of some 30 who will be lecturing at a two-week Watt Committee Summer School on "Energy and Electronics" being held at the University of Reading in September 1981. The views expressed are those of the authors and not necessarily those of their companies or institutions.
The Watt Committee on Energy Ltd
A company limited by guarantee. Registered in England No. 1350046. Charity Commissioners Registration No. 279087.
SEPTEMBER 1981
Contents

Foreword .... iv
Member Institutions .... v

PART 1: INFLUENCES UPON BASIC COSTS AND PRICES OF PRIMARY ENERGY
Introduction .... 3
Oil .... 4
Coal .... 5
Gas .... 8
Uranium .... 14
Electricity .... 16
Conclusions (assembled by P.T.Fletcher, CBE, Working Group Chairman, on behalf of the Group) .... 20
Some approximate conversion factors .... 23
APPENDIX: Energy costs report (A.Cluer) .... 24

PART 2: ENERGY AND ELECTRONICS: An introduction .... 39
The influence of electronics (C.W.Banyard) .... 40
Fundamentals of computer systems (A.J.Findlay) .... 48
Electronics and energy (Dr G.R.Whitfield) .... 55
Use of computers in air traffic control, with particular reference to evaluation and planning (Dr R.Burford) .... 66
Electronic controls for refrigeration plants, with particular reference to energy savings and supermarket applications (J.Schmidt) .... 81
Wind power applied to domestic requirements and some related electronic problems (Professor A.L.Fawe) .... 88
WATT COMMITTEE POLICY AND EXECUTIVE, SEPTEMBER 1981 .... 94
WATT COMMITTEE REPORTS
Foreword
Two of the various threads linking the many activities of The Watt Committee have been its concern with in-depth objective assessment by professionals, and the stimulation of new thinking. These characteristics derive from the very origins of The Watt Committee in the originality of the concept first put forward by Sir William Hawthorne, and the progressive involvement of a widening circle of professionals through their membership of one of the member institutions of The Watt Committee, listed on page iv.

The first part of this report, "Influences upon Basic Costs and Prices of Primary Energy", has strong links with earlier Watt Committee work, and particularly with report No. 2, "Deployment of National Resources in the Provision of Energy in the United Kingdom, 1975–2025". It is an example of the consistent aim of The Watt Committee to probe subjects in depth by building upon earlier work, and the setting up of the working group arose out of discussions with ETSU following the publication of report No. 2. Indeed, there has been a measure of financing for The Watt Committee as a result of this continuation, for which we are most appreciative. Unlike many of our other reports, this present study is qualitative rather than quantitative. This derives from the level of commercial confidentiality required among the various organisations from which people competent to undertake such a study must inevitably be drawn. I know that Paul Fletcher and the members of his working group, who gave a great deal of voluntary effort, view this paper as a step along the road towards a more extensive dialogue on the various influences bearing upon costs and prices of primary energy. It is The Watt Committee's intention to ask members of selected institutions to take a lead in progressing this dialogue to a further stage.

While the first part of this report is significant in its development of earlier work, the second part, on "Energy and Electronics", is equally significant for its heralding of a more complete assessment of the effects of electronics on the various aspects of energy. It was in Autumn 1979 that Cyril Banyard gave a paper to a Council of Europe discussion meeting at which he suggested that "Energy and Electronics" had such inter-related aspects as to form a subject for study. This having been taken up by the Council of Europe, we of The Watt Committee have been pleased to arrange with them a Summer School at Reading University in September of this year, at which there will be two weeks of lecture and discussion covering many aspects of the subject. We are following this by making "Energy and Electronics" the topic for our next Consultative Council in December, when a small selection of the lecturers at the Summer School will be given the opportunity to make their points to an audience drawn from all our member institutions.

This work being done on "Energy and Electronics" is again voluntary. We are deeply grateful to the individuals concerned, to their businesses and institutions who meet any related expenses, and to our voluntary Executive for the co-ordination of this along with so many other diverse efforts.

J.H.Chesters
Chairman, The Watt Committee on Energy
MEMBER INSTITUTIONS June 1981 †
* Association of Home Economists
British Association for the Advancement of Science
British Ceramic Society
* British Nuclear Energy Society
* Chartered Institution of Building Services
Chartered Institute of Building
* Chartered Institute of Transport
* Combustion Institute (British Section)
Geological Society of London
Hotel Catering and Institutional Management Association
* Institute of Biology
Institute of Ceramics
* Institute of Cost and Management Accountants
* Institute of Energy
Institute of Food Science and Technology
Institute of Foresters
Institute of Hospital Engineering
Institute of Internal Auditors (British Chapter)
* Institute of Marine Engineers
* Institute of Petroleum
* Institute of Physics
Institute of Management Services
Institute of Purchasing and Supply
Institute of Quantity Surveyors
Institute of Refrigeration
Institute of Solid Wastes Management
Institution of Agricultural Engineers
* Institution of Chemical Engineers
* Institution of Civil Engineers
* Institution of Electrical and Electronics Technician Engineers
* Institution of Electrical Engineers
Institution of Electronic and Radio Engineers
Institution of Engineering Designers
* Institution of Gas Engineers
* Institution of Mechanical Engineers
Institution of Metallurgists
* Institution of Mining and Metallurgy
* Institution of Municipal Engineers
* Institution of Nuclear Engineers
* Institution of Plant Engineers
* Institution of Production Engineers
Institution of Public Health Engineers
Institution of Structural Engineers
Institution of Water Engineers and Scientists
International Solar Energy Society
Metals Society
Operational Research Society
* Plastics and Rubber Institute
* Royal Aeronautical Society
Royal Geographical Society
* Royal Institute of British Architects
* Royal Society of Chemistry
Royal Society of Health
* Royal Institution
* Royal Institution of Chartered Surveyors
Royal Institution of Naval Architects
Royal Meteorological Society
* Royal Society of Arts
* Royal Town Planning Institute
* Society of Business Economists
Society of Dyers and Colourists
Textile Institute
* Denotes present and past members of The Watt Committee Executive (page 77).
† Additional member, August 1981: Institute of Mathematics and its Applications.
Part 1: Influences upon basic costs and prices of primary energy
© Crown Copyright 1981. The copyright of Part 1 of this report is assigned by The Watt Committee on Energy Limited to HM Government, and the section is the property of HM Government. The Department of Energy has provided financial support for the work. The section is being released for wider public discussion as part of the open debate on Energy Research and Development.
MEMBERS OF THE COSTING WORKING GROUP:
C.W.Banyard,* BOC Limited (Institute of Cost & Management Accountants)
J.Claret, Knight Management Services Ltd (Chartered Accountant)
A.Cluer,* Consultant (Institute of Petroleum)
F.C.Colmer, CEGB (Institution of Electrical Engineers)
P.T.Fletcher, CBE,** Atomic Power Constructions Ltd (Institution of Mechanical Engineers)
Dr J.Gibson, NCB (Institute of Energy)
R.S.Hackett,* British Gas Corporation (Institution of Gas Engineers)
R.Rutherford, National Nuclear Corporation Ltd

The Working Group wishes to acknowledge the contribution made by Mr M.J.Parker for Dr Gibson and Mr F.Jenkin for Mr Colmer, as well as the personal assistance given to the Chairman by Mr C.S.Lothian of the National Nuclear Corporation Ltd and Mrs P.R.Hodkin of Atomic Power Constructions Ltd.
* Executive member of The Watt Committee on Energy **Working Group Chairman
Influences upon basic costs and prices of primary energy
INTRODUCTION

1. The working group has noted that the worldwide demand for primary energy has been met almost entirely by fossil fuels, of which today oil and gas provide more than 60%. Oil and gas have best satisfied requirements for domestic and industrial heat and for transportation, to the extent that the price of oil and natural gas now has a profound effect on national and world economies. Even a low projection of long-term growth of world demand suggests that the world cannot rely on oil and natural gas to sustain growth, because of depleting reserves. Distribution of oil and natural gas in the world is uneven. At the margins price may be related to the cost of extraction, but the role of the OPEC member states in fixing oil prices is a consequence of political and commercial power associated with possession of the world's major oil reserves in an expanding market. In that situation, the price of oil has become, and can be expected to remain, unrelated to the cost of extraction.

2. The UK is currently favoured in that it also has access to natural crude oil and gas in the North Sea, but at a relatively high cost compared with other oil sources. There is no oil that can be made available at a cost approaching that of the natural crude oil provided by nature in the prolific oil fields of the Middle East, a cost which until 1973/4 was reflected in a competitive selling price and rapidly expanding production. It is now generally accepted that the OPEC member countries will continue to limit production and raise prices, and that decisions in these directions will be liable to "political" disturbances. The question considered by the working group is what, in qualitative terms, might influence the choice of alternative sources of primary energy for the UK, and possible dates for their commercial adoption.

3. Having regard to the dominant role of oil products and the general view that world, and also UK, access to natural crude oils and gas will become resource-limited, it appeared to the working group that its considerations would best be related to a review of the oil and gas scene, both historically and projected into the future. This review, prepared by Mr Cluer (Institute of Petroleum) in August 1980, is incorporated with this report.

4. However, in the UK, primary energy is also derived from coal and natural gas and, to a much lesser extent, uranium and hydropower, with a substantial proportion transmitted as electricity. There is competition, especially between electricity and gas for the convenience fuels market and between coal and uranium for the generation of electricity. It is to be expected that the industries involved in the extraction, conversion and distribution of energy will have differing views on how demands can and should be met, based on their own judgement of what they have to offer. There is no doubt, however, that in the UK there is universal recognition that conservation of energy is essential and that oil and gas should be directed to satisfying the premium markets, while coal and nuclear power should be the basis of electricity generation and coal the basis for industrial steam raising. Other renewable sources of energy, such as wind and waves, are unlikely to make major contributions in the foreseeable future, if ever.
5. The UK cannot be separated from the world scene, since for a long time the price of Middle East crude oil will influence the cost at which alternative sources of oil and other primary energies must be made economically available to compete in domestic and world markets. For instance, a unified OPEC price structure of $40 to $50 per barrel (1980 money) might, given adequate confidence, open up in economic terms additional resources of heavy oils, tar sands, oil shales and hydrocarbon liquids from coal, and coal could become a world-traded commodity on a very much increased scale. Very large investments would be needed, which would have to be supported by substantial industrial and technical resources, and lead times would be considerable. Since the UK possesses significant resources of both coal and oil and is a highly industrialised country capable of their exploitation, it could be exposed to pressures to develop and share these resources.
[Figure 1: Flow of energy from primary sources]
6. During its discussion, the working group has therefore taken note of the following possibilities:

In the UK scene:
Further exploration of the continental shelf
Development of gas recovery from North Sea fields
Increased refining of crude oil to meet premium needs
Improved techniques for coal burning, such as fluidised bed combustion and direct turbine drives
Increased coal mining capability
Coal/oil/gas conversion
Extension of nuclear power, including breeding.

In the world scene, as likely to have a modifying influence on OPEC price policy:
Extraction from heavy oil, tar sands and shales
Coal/oil conversion
International trading of coal
International trading of natural gas.

OIL

7. The following paragraphs are based on observations, stimulated by Mr Cluer's report, by members of the working group with experience of the problems of planning and investment in the other related energy industries. Mr Cluer's report appears in full in the Appendix (page 21) and should be studied at this stage.

8. In the UK, our consumption of crude oil passed through a 1973 peak of 113 million tonnes, declining to 94 million tonnes in 1979, with natural gas rising to maintain a near-constant total equivalent to 135 million tonnes. The indication is that, whilst the higher prices which have developed are beginning to be absorbed by consumers, oil consumption fell by 15% from 1979 to 1980. Further, UK consumption is now matched by North Sea production. This consumption pattern has to be considered against published information on proven, probable and possible reserves, which aggregate 2,400 million tonnes.
The conclusion is that, at the foreseeable rate of consumption, and given that expectations of future discovery and production are realised, there may be 15 to 20 years of self-sufficiency. There is clearly a need to consider what part of the demand could be economically relieved by other energy sources. In 1978/9, 42 million tonnes of oil products were used by road, sea and air transport and for chemical feedstocks, whilst 52 million tonnes went to industrial and domestic process heat and electricity, the latter taking about 12 million tonnes. Within the UK there is the possibility of relieving the pressure of demand for oil by substituting other energy sources to supply predominantly industrial needs, of which electricity generated from coal and nuclear power is immediately available. Energy conservation can also make an important contribution, but at best these together can only relieve part of the demand and defer the date when oil imports again become necessary.

COAL

9. Whilst the principal factors which will determine the future trends in the costs and prices of UK coal are grouped under a number of main headings, it should be emphasised that such categorisation inevitably has an element of artificiality, in view of the interaction between the various factors.

'Structural Change'

10. At present, the coal industry has a significant proportion of old, low-productivity capacity at which technical progress is very difficult to achieve. The rate at which this old capacity reaches the end of its realistic reserves will be one of the factors influencing the industry's cost structure.

11. On the other hand, reserves are available to support very high productivity in new mines and major developments at existing long-life mines. At present, the industry has some 7 billion tonnes of 'operating' reserves (fully proved and accessible) at existing collieries, with some further reserves sufficiently proved to develop over 20 million tonnes of annual capacity in the current new mine project programme. Only a small proportion (some 3 billion tonnes) of the ultimately recoverable reserves would need to be fully proved and upgraded into 'operating' reserves to sustain the industry's expansion programme to the end of the century. Furthermore, experience to date with the exploration programme indicates that the new developments can take place in mining environments which will allow much higher productivities than are possible in the generality of existing collieries, and with costs (including capital charges) lower than much of existing capacity. In these circumstances, with new capacity 'intra-marginal', the industry's overall cost trend will ultimately benefit from a high rate of investment, coupled with the effective exhaustion of old capacity. Thus environmental planning constraints, which might affect the rate at which new capacity could be introduced, have a potential cost effect also. Restrictions in the overall availability of capital which led to a lesser rate of 'restructuring' in the industry would have a similar effect.

12. The NCB's 'Plan for Coal' investment programme, initiated in 1974, has secured the industry's capacity to the mid-1980s. However, in an extractive industry with much old capacity, and with long lead-times to bring in new capacity, work is already being undertaken on projects which will not fructify until the late 1980s.
The continuity of this programme will have a significant effect, not only on the industry's output in the longer term, but also on its efficiency and costs.

13. For the foreseeable future, the cost structure of the UK industry will be determined by deep-mined operations, as the proportion of opencast coal is limited by both geological and environmental considerations to around 10% of total output.

Manpower Costs

14. Manpower accounts for about half of total deep-mining operating costs, being determined in turn by productivity per man, and the cost per man employed.

15. Productivity in terms of output per man is a crucial parameter in the UK coal industry. It will be progressively increased by the structural changes planned in the industry (paras. 10 to 12 above). The industry's average productivity is 2¼ tonnes a manshift, but this conceals a wide range, both actual and potential. The oldest collieries nearing exhaustion of their effective reserves often have productivity half the national average, whereas the best performances are twice the current national average, and even higher levels are planned for the new mines in the current investment programme. The 'restructuring' effect on productivity can therefore be substantial.

16. In parallel with the productivity benefits of a new pattern of primary capacity, improvements in mining technology are in progress, in particular at the coalface, where the bringing together of the most advanced technology, associated with the use of 'heavy-duty' roof supports, has made possible coalface outputs of 2,000 tonnes a day or more, compared with the current national average of some 700 tonnes. The progressive introduction of these high-performance faces will have a dramatic effect on manpower productivity at the coalface. High face outputs can also have a beneficial effect on overall productivity, and to assist this process, computer-based face monitoring arrangements, with control of the underground conveyor systems,
are being introduced to facilitate greater continuity of coal production, reduce delays, and thus release the full potential of the coalface machinery now available.

17. The cost per man employed will be determined by the trend in miners' wages relative both to changes in the value of money and to average wages in the economy as a whole. Non-wage costs (such as National Insurance contributions) and benefits (such as pensions) also need to be taken into account.

18. During the next 20 years or so, the 'trade-off' between productivity and wages will occur within the context of extending, principally by investment, the best of existing technology more widely through the industry. More revolutionary technologies (such as in situ gasification) are worth R&D study, both nationally and internationally, but are unlikely to have a significant effect this century.

Provisioning Costs

19. 'Provisioning' in the widest sense (including materials, repairs, plant and machinery, and other direct operating expenses) currently accounts for about a third of total deep-mining operating costs. In recent years, these costs have tended to increase in 'real' terms, partly because of the initial high cost of technical improvements, and also because of the downward trend in total industry output, there being a significant 'overhead' element in this category of expenditure. Thus, the future trend in provisioning costs per tonne will be ameliorated by the planned increases in output and productivity referred to above.

Capital Charges

20. Currently, capital charges are relatively low in the UK coal industry. Historic cost depreciation represents only some 3% of total mining operating costs, while the totality of depreciation and interest charges represents about 10% of turnover. This situation arises partly from the relatively manpower-intensive nature of deep mining, but also as a legacy of the serious underinvestment in the industry in the 1960s, when international oil was cheap. Further, there were substantial write-offs of assets prior to the 1973/4 oil crisis, which meant that the coal industry embarked upon 'Plan for Coal' with very low depreciation provisions, and therefore very low levels of internal financing of the investment programme. Under the current methods of financing, with all the resultant borrowings at fixed rates of interest (with no equity capital), this situation has led to a rapid rise in capital charges, particularly interest, and this is likely to continue for some time into the future, unless the present financing arrangements are changed. In the longer term, however, the industry should move nearer to a state of equilibrium, so that the incremental effects on total costs will be significantly reduced.

21. The above description should be distinguished from the appraisal of the capital programme, under which the industry is required to show a 5% D.C.F. return in 'real' terms on the investment programme as a whole. However, it is in the nature of a D.C.F. appraisal (particularly with a programme with a significant long lead-time element) that the relationship with accounting costs in any particular year should be indirect.

Transport of Coal

22. The foregoing paragraphs have dealt principally with the factors affecting deep-mined costs. The delivered price of coal is also affected by transport costs. In the main market, power stations, these amount to some 8% of the delivered price, but that proportion tends to be higher for smaller deliveries to the industrial and domestic sectors.
The major development over the last decade has been the extension of 'merry-go-round' trains, usually associated with rapid loading installations, which have greatly increased the efficiency of railway movement of coal. This process is likely to continue, and to make a major contribution to stabilising transport costs. The principle, using centralised depots, could be extended to the industrial market; and improved loading facilities have improved the economics of short-haul road transport. Although the economics of new methods of transport (e.g. pipelines) are being considered, these are unlikely to make a significant impact on the overall costs of transporting coal in the foreseeable future. In total, therefore, the changes in the pattern and method of coal transport over the next 10–20 years are unlikely to have a crucial effect on the overall delivered cost of coal.

Relationship between Costs and Prices

23. The relationship between coal costs and prices is complex and interactive. The value of coal is greatly conditioned by the price of international oil, since it is international oil that continues to be the world's dominant and balancing energy supply. The oil crisis of 1973/4 was therefore a turning point for the UK coal industry, not only for its immediate effect on the price of competing oil, but also because of the changed expectations with regard to energy supply generally. Subsequent events in the Middle East have reinforced the view that oil will become progressively scarcer and more expensive in 'real' terms, thereby increasing the value of alternatives, including coal. In turn, this makes possible coal capacity investments and encourages coal
technologies which would not have been considered viable in the 1960s. In a very real sense, therefore, the price coal can command in future influences the marginal cost level which is acceptable; this is the inverse of the view that marginal costs determine prices.

24. Under certain circumstances, UK coal prices could also be influenced by the price of coal in international trade. The trade in steam coal is likely to increase significantly as one of the means of mitigating the world oil problem, reflecting the fact that there are some countries (particularly Australia and South Africa) with the potential to expand production substantially above their domestic requirements. In addition, the United States have the reserves to support a substantial coal export market, but other considerations may in fact constrain the growth of trade in American coal. Steam coal trade is currently 50 m. tonnes a year, and this could rise to 500 m. tonnes a year by 2000. However, the price at which such coal would be made available is problematical, since:

(i) Even a greatly expanded world steam coal trade would be unlikely to be sufficient to influence the world oil price;
(ii) It is uncertain how far long-term contractual arrangements will overcome the inevitable destabilising factors which occur from time to time, and how much competition there will be between coal producers;
(iii) The amount of world coal traded will be relatively marginal to the total production and use of coal world-wide;
(iv) The price coal can command will be influenced by the supply and demand position of all other primary fuels in the bulk heat market;
(v) While the growing world oil problem will tend to increase the global demand for coal, this will need to be matched by the provision of coal-burning equipment (particularly power stations), and this may become 'out-of-phase' from time to time. In other words, prices will tend to fall in a 'buyers' market' and rise in a 'sellers' market'. In the latter case, internationally traded coal prices would tend to follow the rising price of international oil.

So far as the UK is concerned, there is a further element of uncertainty in the sterling exchange rate and balance of payments, which would have a significant effect on the price of imports.

25. Subject to the possible influence of world coal prices, the long-run price of UK coal will be influenced predominantly by the price of oil. However, there are a number of actual and prospective markets for coal within which the price of oil will have a differential impact. In the coking coal market, where coal has useful properties over and above calorific value, coal is strongest against oil, and UK coking coal prices are likely to be much more influenced by the state of the steel industry and the price of coking coal imports than by the price of oil. For steam coal, however, coal's inherent disadvantage against oil is most modest in the power station market and in bulk combustion for general industry. For this reason, the power station market tends to give the highest 'net-back' to the coal producer, and is therefore the preferred market for steam coal; a situation which is likely to continue until such time as nuclear power is expanding at a sufficient rate to cause a significant fall in the total fossil fuel requirements for electricity generation. The main contribution of coal towards mitigating the world oil problem during the rest of this century will be the displacement of fuel oil in power stations.
In circumstances in which there is a strong demand for coal for bulk combustion, coal prices will tend to reflect the price of fuel oil, but allowing sufficient advantage for coal use to be maximised. To the extent that coal is also seeking new markets in the medium to small industrial sector, the competitive advantage will need to be greater.

26. Coal conversion to gas or liquid fuels requires a further advantage against natural gas or oil. The pricing implications in this area are very uncertain at present. It is not clear how SNG would be introduced into the British Gas system (in particular, the relationship between incremental and average system cost). SNG would tend to be introduced when there was insufficient natural gas to service the 'premium' markets for gas (since SNG would be less attractive than direct coal combustion as a means of meeting the 'non-premium' demands). The ex-plant cost of coal-based SNG would be between 2 and 3 times the cost of the input coal. So far as coal liquefaction is concerned, rather different considerations apply. The first response of the oil industry to rising costs of crude oil is likely to be more intensive refining, to reduce the proportion of fuel oil (the product which is most readily substituted by the direct combustion of coal). At some point, however, it is likely that coal would become a preferred refinery feedstock. In spite of the recent increases in the price of crude oil, it is estimated that a further significant increase of the crude oil price relative to the price of coal is required to make coal liquefaction economic in the UK. The timescale here is uncertain, although it is important that research and development work on coal liquefaction is pursued.

27. Thus, for a given world oil price, there will be a wide range of 'net-backs' to the coal producer depending on the market mix, both internationally and in the UK. In turn, these will determine the effective selling prices, and thus the marginal production costs which will be acceptable in economic terms.

28. The foregoing emphasises the interaction between coal and oil prices, the need for investment in new deep mines and modern equipment to secure output, and the uncertainty in the timing of economic pressures to introduce coal liquefaction. This timing is dependent on the rate of escalation of OPEC crude prices and of coal and plant costs.
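The 'net-back' reasoning of paragraphs 25 and 27 can be made concrete with a small illustrative calculation. The sketch below uses hypothetical prices, chosen only to show the mechanics and not drawn from this report: the producer's realisation in each market is the price of the displaced fuel, less transport costs and the discount needed to win the business.

```python
# Illustrative net-back comparison for steam coal in two markets.
# All figures are hypothetical (pounds per tonne of coal equivalent),
# chosen only to show the mechanics described in paras. 25 and 27.

def netback(displaced_fuel_price, transport_cost, switching_discount):
    """Realisation to the coal producer: market value of the displaced
    fuel, less delivery costs and the price advantage the customer
    needs before choosing coal."""
    return displaced_fuel_price - transport_cost - switching_discount

power_station = netback(displaced_fuel_price=60, transport_cost=5, switching_discount=5)
small_industry = netback(displaced_fuel_price=60, transport_cost=9, switching_discount=15)

print(f"Power station net-back : £{power_station}/tonne")   # £50/tonne
print(f"Small industry net-back: £{small_industry}/tonne")  # £36/tonne

# The market with the highest net-back (here, power stations) sets the
# marginal production cost that is economically acceptable.
```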
GAS

29. Whilst the availability of gas is associated with that of oil, and in future gas may be made from coal, it tends to be traded as a separate source of energy to the individual consumer and often to the distributor. The world trade in liquefied natural gas is only about 5% of that of crude oil, and therefore small in its influence on price. Most natural gas is pipelined and used indigenously, though traffic across national boundaries is increasing.

30. The following tables, based on the BP Statistical Review 1979 and papers by Peebles (of Shell International Gas Limited) to the LNG-6 Conference in April 1980, illustrate the relative levels of oil and gas traded in 1979 and estimates of consumption for 1990.

World natural gas consumption and oil trade 1979

                              World natural gas consumption      International oil trade (10⁶ tonnes)
                              TCF    % of total   10⁶ t.o.e.     Crude    Products
Indigenous                    42     87.5         1008           -        -
Internationally traded:
  a) Pipeline                 4.6    9.5          228            -        -
  b) LNG                      1.4    3            72             -        -
Total                         48     100          1152           1495     257
                                                                 (total oil traded: 1752)

Estimated world natural gas consumption 1990

                              TCF     10⁶ t.o.e.   % of total   % of 1979 oil traded
Indigenous                    52.7    1265         81           72
Internationally traded:
  a) Pipeline                 7.3     175          11           10
  b) LNG                      5*      120          8            7
Total                         65      1560         100          89

* Base case. Low case estimate: 3 TCF; high case estimate: 7 TCF.

Assuming that there are no major changes in the pattern of oil imports and exports in the next 10 years (of the 1495×10⁶ tonnes of crude oil moved in 1979, 1248×10⁶ tonnes came from the Middle East and Africa, and could reasonably be expected to continue as the dominant oil trading factor through to 1990), the quantity of LNG in international trade is only likely to increase in the next 10 years from 4% to 7% of the oil traded, i.e. to remain a relatively small factor in the world energy balance.
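The 4% and 7% shares quoted above follow directly from the tables; a minimal cross-check, using the tonnage figures as printed:

```python
# Cross-check of the LNG shares quoted in para. 30, using the table values.
oil_traded_1979 = 1495 + 257   # crude + products, 10^6 tonnes
lng_1979 = 72                  # 10^6 t.o.e., from the 1979 table
lng_1990 = 120                 # 10^6 t.o.e., 1990 base case

print(f"LNG share, 1979: {100 * lng_1979 / oil_traded_1979:.1f}%")  # ~4.1%
print(f"LNG share, 1990: {100 * lng_1990 / oil_traded_1979:.1f}%")  # ~6.8%
# i.e. roughly 4% rising to 7% of the 1979 oil trade, as stated.
```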
31. There has been a tendency for gas producers to attempt to price up to crude. OPEC has been working on the problem for four years, with the view that export prices should be "in line" with crude oil on a thermal content basis. The high cost of transport, whether by tanker or pipeline, is an important factor, as shown by further information from the LNG-6 Conference.

Source: 'Energy Alternatives to LNG', by E.J.Daniels, K.C.Darrow, H.R.Linden, Paper 4, Session 1 of LNG-6, April 1980.

Distance (nautical miles)    Terminal capacity    Cost (p/therm)    $/10⁶ Btu ($2.2 = £1)
1000                         1000 mcfd            2.7               0.59
4000                         500 mcfd             5.45              1.2
5000                         500 mcfd             6.36              1.4

Estimated pipeline transport costs: Alaska gas (source as above)

Distance                     Capacity             Cost (p/therm)    $/10⁶ Btu
4800                         2500 mcfd            13.1              2.88

Prices rose rapidly early in 1980 into the range of $3 to $5 per million Btu ($1 per million Btu = $5–6 per barrel), and the trend was against expansion of LNG trade, where freighting costs for projected supplies to N. Europe from Algeria are in the range of $2.20–$2.30.
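The two cost columns in these tables are related by a straightforward unit conversion: a therm is 10⁵ Btu, so 10⁶ Btu is ten therms, and the quoted exchange rate is $2.2 = £1. A minimal check that the columns are consistent:

```python
# Convert the pence-per-therm column to $ per 10^6 Btu at $2.2 = £1.
# 1 therm = 10^5 Btu, so 10^6 Btu = 10 therms.
USD_PER_GBP = 2.2

def pence_per_therm_to_usd_per_mmbtu(p_per_therm):
    pounds_per_mmbtu = 10 * p_per_therm / 100  # ten therms; pence -> pounds
    return pounds_per_mmbtu * USD_PER_GBP

for route, p_th in [("LNG, 1000 nm", 2.7), ("LNG, 4000 nm", 5.45),
                    ("LNG, 5000 nm", 6.36), ("Alaska pipeline", 13.1)]:
    print(f"{route:16s} {pence_per_therm_to_usd_per_mmbtu(p_th):.2f} $/10^6 Btu")
# 0.59, 1.20, 1.40 and 2.88 -- matching the dollar column of the tables.
```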
The gas trade is characterised by relatively few outlets and inlets, and is therefore unlikely to have the type of free or "spot" market characteristic of oil. Prices will therefore be determined by the bargaining power of suppliers and distributors. Suppliers negotiating for parity with crude oil are experiencing resistance from the gas utilities who are potential purchasers.

32. The UK is largely self-sufficient in natural gas, though some is obtained from the Norwegian sector of the North Sea and a small supply of LNG is imported from Algeria. Oil-based SNG is used for topping up. The published estimates of gas available to the UK from its own North Sea fields in 1979, in the form of proven, probable and possible reserves, totalled 53.4 trillion cubic feet (tcf). The ultimate gas reserves may turn out to be higher, since primary exploration is comparatively recent and the estimates make no allowance for still further discoveries. Premium gas sales are considered likely to rise to around 5,000 mcfd by the late 1980s, and gas sales could probably be sustained through to the end of the century at the level expected by the early 80s. By then, natural gas will need to be supplemented from other sources, though the exact timing remains uncertain. The following paragraphs discuss the development of the gas industry through the 1950s and 1960s, its present cost structure, and the medium and long term trends.

33. In 1949, when the Gas Industry was nationalised, it was in the hands of over 1,100 independent undertakings. The gas sold was almost entirely manufactured from coal; there was little interconnection between individual piped systems; and there was keen competition with the other fuel industries in the industrial and domestic markets. At that time, domestic gas was primarily used for cooking, although there was some space heating using the old radiant fires, and some gas was still used for lighting. By the late 1950s, there had been some rationalisation of supply, with production being concentrated at the larger and more efficient works, but there was no growth in sales, and the cost of good-quality coal for carbonisation was rising more rapidly than that of the coal for steam raising used for electricity generation.
A further programme of research into SNG processes based on both oil and coal is currently being undertaken by British Gas. 4) The redesign of gas spaceheaters to produce high efficiency and attractive units encouraged customers to change from coal and this was the first step towards the growth of widespread domestic central heating in this country. 5) At the same time the pricing policy for domestic gas was changed to one which gave the customer some incentive to change from solid fuel. 35. In 1967 the decision was taken to convert all customers' appliances to burn natural gas directly (rather than reforming it to towns gas). During the previous seven years the Gas Industry had reorganised its affairs sufficiently that it could embark on this huge task with confidence of success. In view of the large financial commitments to the oil industry in respect of gas purchase contracts, the marketing policy was directed towards obtaining as much ªPremiumº demand as possible, with sales to non-premium markets limited to what was needed to give operational flexibility. (ªPremiumº markets are those in which gas has special value compared with other fuels because of its cleanliness and controllability). Government price restraint has been an important factor. This past restraint is in marked contrast to the current Government requirement that domestic tariffs should be increased by 10% in real terms during each of the next three years. On the other hand, in the non-domestic market, prices have been set relative to the price of competing fuels. In the early days of natural gas, the Marketing strategy was geared to the task of selling the gas that was available most advantageously. Now, however, there may be a little more control over the rate at which gas needs to be accepted, and consequently the rate of growth in demand can more readily be related to the requirements of the preferred markets. 36. The following tables illustrate the relative size of the major cost categories at present, on a pence per therm basis and changes which have occurred.
Costs for the gas business in 1979/80 (pence per therm)

Prime materials                    5.8
Other trading costs (net)          6.4
Capital charges                    2.9
Total (net of non-gas income)     15.1

The above table is derived from the Performance Ratios in the 1979/80 Accounts. Non-gas income has been subtracted from total trading costs (excluding prime materials) to give 'net' other trading costs. The costs are therefore for the gas business only; they include only the net costs of customer service and appliance marketing. On this basis, the largest cost category is 'other trading costs', which covers both labour costs and the costs of materials and services. The cost of prime materials (mainly natural gas) is nearly as important as other trading costs, whereas capital charges account for a much smaller proportion of costs. The following tables highlight changes in costs in real terms since 1969/70 and also show changes in the cost structure over this period.

Changes in costs, pence per therm (1979/80 prices)

                                        1969/70   1973/4   1977/8   1979/80
Prime materials                            10.2      4.3      3.9       5.8
Other trading costs (net)                  12.5      7.4      7.0       6.4
Depreciation and amounts written off        4.6      4.3      4.9       2.7
Interest                                    6.0      3.5      1.1       0.2
Total costs                                33.3     19.5     16.9      15.1

Cost categories as a percentage of total costs

                                        1969/70   1973/4   1977/8   1979/80
Prime materials                            30.7     21.9     23.1      38.3
Other trading costs (net)                  37.6     38.0     41.6      42.3
Depreciation and amounts written off       13.9     22.2     28.7      17.8
Interest                                   17.8     17.9      6.6       1.6
Total                                     100.0    100.0    100.0     100.0

The figures shown in the last two tables must be interpreted in the context of the major transition in the nature of the gas industry involved in the switch from manufacturing gas to the transmission and distribution of natural gas, as shown in the following table. The change to replacement-cost accounting in 1976/77 should also be noted.

Gas availability (million therms)

                                        1969/70   1973/4   1977/8   1979/80
Town gas                                   5010     2170       20        10
Natural gas for direct supply               740    10240    15820     17360
Total gas available                        5750    12410    15840     17370
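The percentage table is simply the pence-per-therm table normalised by each year's total, so the two can be cross-checked directly (small discrepancies against the printed percentages reflect rounding in the original figures). A minimal sketch:

```python
# Re-derive the percentage breakdown from the pence-per-therm table.
costs = {  # pence per therm, 1979/80 prices
    "1969/70": [10.2, 12.5, 4.6, 6.0],
    "1973/4":  [4.3, 7.4, 4.3, 3.5],
    "1977/8":  [3.9, 7.0, 4.9, 1.1],
    "1979/80": [5.8, 6.4, 2.7, 0.2],
}
labels = ["Prime materials", "Other trading (net)", "Depreciation etc.", "Interest"]

for year, values in costs.items():
    total = sum(values)
    shares = ", ".join(f"{lab} {100 * v / total:.1f}%" for lab, v in zip(labels, values))
    print(f"{year}: total {total:.1f}p/therm -> {shares}")
# e.g. interest falls from 6.0/33.3 = 18% of total costs in 1969/70 to
# 0.2/15.1 = 1.3% in 1979/80, the sharp reduction noted in para. 37.
```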
37. Clearly, total costs per therm have fallen substantially in real terms. Both the cost of prime materials and other revenue costs have almost halved since 1969/70, while there has been an even larger reduction in interest charges. The relative proportion of prime material costs fell between 1969/70 and 1976/7, mainly due to the replacement of manufactured gas by natural gas. However, there have been large increases in prime materials costs over the past three years. This has mainly reflected the impact of the coming on stream of more expensive northern North Sea gas supplies. These supplies, received at the St. Fergus terminal in Scotland, are considerably more expensive than those from the southern North Sea. The reduction in 'other trading' costs per therm has been very much influenced by the increase in sales over the past 10 years. British Gas has benefited from economies of scale as the industry expanded, and this has provided important advantages in terms of reduced unit costs. This is partly because a large element of the 'other trading' costs is related to numbers of customers
rather than to throughput: the number of customers has only increased by 14% over the past 10 years, whereas sales have more than trebled. As mentioned previously, in real terms total capital charges on a pence per therm basis have fallen very sharply over the last 10 years. This fall is largely due to the significant reduction in interest charges, reflecting the general improvement in the industry's financial position and, in recent years, the extent to which the industry has been able to pay off debt. In real terms there has also been a fall in depreciation and amounts written off (on a pence per therm basis) over the past 10 years. This is partly because the expansion of the industry's asset base (in real terms) has not matched the significant expansion in gas sales, and partly due to the ending of the write-off of conversion expenditure and displaced plant. In total, capital charges now make up only 19% of total costs, compared to nearly 32% 10 years ago.

The Medium Term

38. In the medium term, the main change in the cost structure is likely to be the increase in prime material cost, in real terms and as a proportion of total costs. This will reflect two factors. First, the cost of gas purchased will rise substantially as the proportion of gas taken from the northern North Sea increases: the volume of more expensive Northern Basin supplies will rise relative to the cheaper Southern Basin supplies. Secondly, gas purchase prices from the Northern Basin are indexed to external factors which are expected to push up costs in real terms over the next five years. Moreover, the recently introduced Government levy on natural gas purchases by British Gas will have a significant effect on future prices. Trends in other trading costs in real terms seem likely to reflect general economic development and the continued spreading of 'fixed' costs over a larger volume of output. However, the growth in gas sales over the next five years will be much smaller than during the 1970s, and only a small fall in other trading costs per therm is likely. The next few years will see a significant increase in investment, mainly due to the need to increase future peak supplies. As a result, depreciation charges per therm are expected to remain fairly constant in real terms.

The Longer Term

39. a. Gas costs
Assuming a continuation of the policy of giving priority to premium markets, premium gas sales are likely to rise to around an average of 4,000 mcfd in the early/mid 1980s. Some further expansion, to reach a plateau of around 5,000 mcfd of premium sales by the late 1980s and into the next century, is expected. Such sales would be in addition to interruptible sales used for load balancing and other operational purposes. At some time near the turn of the century, natural gas will need to be supplemented, and ultimately replaced, from alternative sources. The exact timing will depend on a number of factors, including the growth of markets, whether there are new gas discoveries, the size of economically recoverable reserves, the extent of energy conservation, and prices relative to other fuels. A convenient starting point is provided by the so-called 'Brown Book', the Department of Energy's report on 'Development of Oil and Gas Resources in the UK 1980'. This estimates proven gas reserves at 26.6 trillion cubic feet (tcf), probable reserves at 11.4 tcf, and possible reserves at 15.4 tcf: a total of 53.4 tcf remaining at December 1979. These figures exclude natural gas already used, as well as the reserves already contracted to British Gas in the Norwegian part of the Frigg field. (It is in fact quite possible that ultimate gas reserves will turn out to be higher, since the primary exploration phase around our coasts is comparatively recent and the estimates make no allowance for discoveries yet to be made.)

40. At the present time, it appears that the options which may be available to supplement natural gas supplies are imported LNG, new sources of pipeline gas, or manufactured SNG from either oil or coal feedstocks. The present policy is to avoid closing options now, in order to retain the flexibility necessary to adapt to changing circumstances and to ensure that the industry does not become overdependent on any single source of supply. The eventual choice will be made on the basis of relative costs, subject to other considerations such as security of supply and reliability.

41. SNG Technology

The Gas Industry first became interested in high pressure gasification processes in the late 1950s, when a Shell process for partial oxidation of oil was installed at the Isle of Grain, and when Lurgi processes for partial oxidation of coal were being planned for Coleshill in the West Midlands and for Westfield in Scotland. Delays in obtaining planning consent delayed the Lurgi plant construction until 1964 and 1962 respectively, by which time the steam/naphtha catalytic reforming process was available and was economically more attractive. The latter process was first introduced on a large scale by ICI and produced gas at up to 350 lbf/in² (this opened up the prospect of high pressure transmission) but with a calorific value in the order of 300 Btu/ft³. The gas industry initially used LPG or imported Algerian
methane to enrich the lean gas to 500 Btu/ft³, but later used one of its own naphtha-based processes, which were more economic.

42. Catalytic Processes

The basis of the Gas Council Catalytic Rich Gas (CRG) process is to pass highly desulphurised naphtha and steam through a metal-based catalyst at a comparatively low temperature; after shift conversion and carbon dioxide removal, gas with a CV of between 800 and 850 Btu/ft³ is produced. By further processing, it is possible to produce fully interchangeable SNG, and 14 units have been installed in the USA with an aggregate annual production of over 4,500 million therms. The process is energy-efficient (in the order of 90%) and the plant is inexpensive compared with other SNG processes, but at present it can only be used to gasify light feedstocks such as naphtha. Further experimental work on this process is aimed at extending the range of oils that can be used.

43. Hydrogenation Process

Work on hydrogenation of heavy oils, using fluidised beds of oil coke for temperature control, led to the development of the much simpler "Recycle Hydrogenator" based on naphtha feedstock. After condensation and carbon dioxide removal, gas was produced with a CV of about 1,000 Btu/ft³, and this was used to enrich lean ICI gas. The earlier experimental work on hydrogenation of oils is now being resumed, as this is the preferred method of converting any heavy and residual oils that may be available into SNG.

44. Gasification of Coal

The Lurgi plant at Westfield, Fife, was shut down in 1975, when the conversion to natural gas in Scotland was nearing completion. Since then, the plant has been used as a development centre. Initially the research programme investigated the ability of the process to handle American coal, in a joint venture with several American companies. Progress is now being made towards achieving higher efficiencies from the plant, now modified to become the "British Gas/Lurgi slagging gasifier". This improvement will be achieved primarily by reducing the ash to the molten state, as slag. A further aim is to develop the process so that it can be operated on run-of-mine coal with a high proportion of "fines". While SNG has been produced at Westfield and has been distributed to customers with no adverse effects, further development work is desirable to improve the basic process, to supplement our knowledge of the ancillary processes, and to decide the best way of dealing with effluents and similar problems. At present it is thought that it could take 10 years to build a plant of this sort, from the time the decision was taken up to operation. As more experience is gained, the lead time might be reduced to perhaps five or six years. The tables which follow in para. 47 may be taken to give an indication of the relative capital and running costs of the important SNG processes.

45. How may these Processes be Used?

Coal gasification and heavy oil hydrogenation processes are complicated and, in the case of coal plants, the feedstock supply will require an extensive bulk handling system. For these reasons, plants of this type will be more suited to base load operation than to following the daily and seasonal demands of gas customers. Storage of gas, perhaps in depleted gas wells, may solve the problem of demand variation, but another alternative would be to install the simpler, more flexible catalytic processes and to operate these at times of increased demand.
The two alternative ways of meeting seasonal demand variation would lead to two separate strategies during the long period of rundown of the existing gas fields. Either base load plant could be installed and operated in conjunction with storage and modulation of the remaining gas supplies, or peak load plant could be installed first, to supplement the remaining natural gas at peak periods. It is because these issues cannot be decided now that British Gas is continuing its development work into SNG processes. SNG is not likely to be needed in the immediate future, so the decision should not be taken until there is a more definite need and a clearer view of the economic factors can be obtained.

46. b. Non-Gas costs
The two other main components of the costs of SNG are the costs of feedstocks and of storage, transmission and distribution (STD). Clearly the cost of coal feedstocks will be affected by the real increase in miners' wages and productivity, environmental considerations and market factors. The most significant feature of STD costs is that, given the existence of the present system, the relevant costs are likely to be largely of a maintenance nature.

Characteristics of some SNG Processes

47. The following tables show the estimated relative costs of producing SNG using four processes of particular interest to the Gas Industry.

48. The feedstock costs used in constructing the tables are as follows:

Naphtha            £88/tonne
Light crude oil    £59/tonne
Residual oil       £47/tonne
Coal               £23/tonne
The costs are based on spring 1979 feedstock prices and have been calculated over a range of load factors. In relation to the quoted costs of coal gasification using fixed bed gasifiers, it has been reported* that the overall thermal efficiency of the British Gas/Lurgi slagging gasifier is over 70%. This will result in worthwhile improvements and cost reductions compared with the figures quoted here. It should be noted that all the relative costs have been calculated as variations from the cost of producing CRG and Hydrogasification at a load factor of 90% (= 100).

Excluding feedstock costs

Process Route                                Load factor: 90%    80%    70%    60%
CRG and Hydrogasification                                 100    110    120    130
Hydrocracking                                             140    150    170    190
Fluidized Bed Hydrogenation                               210    230    260    300
Fixed Bed Coal Gasifier                                   760    850    950   1090

Including feedstock costs

Process Route                    Feedstock       Load factor: 90%     80%     70%     60%
CRG and Hydrogasification        Naphtha                      1110    1120    1130    1140
Hydrocracking                    Light crude                   930     940     960     980
Fluidized Bed Hydrogenation      Residual oil                  910     930     970     990
Fixed Bed Coal Gasifier          Coal                         1520    1600    1710    1850

49. The likely plant construction periods required for the above processes are of the order of:

CRG and Hydrogasification        3–4 years
Hydrocracking                    3–4 years
Fluidized Bed Hydrogenation      3–4 years
Fixed Bed Coal Gasifier          5–7 years
The shorter period represents the actual construction time which may be needed, from obtaining access to the site with a planning consent through to plant commissioning. The longer periods quoted make an allowance for the time that will be needed for the plant to be brought to a satisfactory state of reliability. In addition, it is likely to take between one and three years to obtain planning consent for a new SNG plant. Thus, the time taken to achieve reliable output from a coal-based plant, from the time of taking the first decision, would be 8–10 years.
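The spread of the "excluding feedstock" figures with load factor is what one would expect where most of the non-feedstock cost is capital-related: unit cost behaves roughly as a fixed component divided by load factor, plus a variable component. A rough decomposition along those lines (a sketch only; the indices are relative, with CRG at 90% = 100) helps explain why the capital-heavy coal gasifier is reserved for base load duty:

```python
# Rough decomposition of the "excluding feedstock" indices into a
# fixed (capital-type) and a variable part: index ~= a / LF + b,
# solved from the 90% and 60% points of each process. Illustrative only.
table = {
    "CRG and Hydrogasification":   {0.9: 100, 0.6: 130},
    "Hydrocracking":               {0.9: 140, 0.6: 190},
    "Fluidized Bed Hydrogenation": {0.9: 210, 0.6: 300},
    "Fixed Bed Coal Gasifier":     {0.9: 760, 0.6: 1090},
}

for process, points in table.items():
    (lf1, c1), (lf2, c2) = points.items()
    a = (c1 - c2) / (1 / lf1 - 1 / lf2)  # fixed (load-factor-sensitive) part
    b = c1 - a / lf1                     # variable part
    check_70 = a / 0.7 + b               # reproduce the 70% column
    print(f"{process:28s} fixed ~{a:4.0f}  variable ~{b:4.0f}  at 70%: {check_70:4.0f}")

# The 70% column comes back as ~117, ~169, ~261 and ~949 against the printed
# 120, 170, 260 and 950: the coal gasifier is dominated by its fixed
# component, hence the preference for running it at base load.
```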
* IGE Communication 1112, C. Timmins, 27th Nov. 1979, 'The Future Role of Gasification Processes'.
URANIUM

50. The energy available from uranium is exploitable almost entirely as electricity, and its use is therefore closely identified with the electricity supply industries (utilities). There is no immediately foreseeable commercial exploitation as an industrial heat source, though studies relating to steel manufacture and experiments concerned with district heating have been carried out. Use in propulsion is confined almost entirely to naval applications.

51. Uranium is in competition, therefore, with coal, oil and gas as a basic fuel for electricity generation, but with the unique position that its energy can only be made available to the end user as electricity. Therefore, it has no substantial alternative route to the energy market other than that of the electricity utilities. World requirements for uranium are therefore identified with world requirements for the generation of electricity. They are sensitive to the demand for electricity and to the relative costs of generating electricity from fossil fuels.

52. There are special features associated with the use of uranium for the generation of electricity which influence public opinion as to its acceptability, such as the risk to the public from radiation and the prospect of military proliferation. These features expose the use of uranium to political opposition at every level of the process, from mining through the operation of electricity generating stations to the final disposal of waste products. Even the choice of technical design of nuclear reactors becomes a matter for political action. However, current indications are that nuclear fission thermal reactor power stations are able to provide more economical electrical energy than any fossil fuels, even in areas of abundant cheaply mined coal, provided that comparable criteria of environmental acceptability are applied. Yet the world currently has substantially under-committed nuclear power plant industries, and uranium prices are falling. Only France and Japan have substantial commitments to nuclear power, each with the object of reducing dependence on fossil fuels, and particularly oil.

53. The most recently published international forecast of nuclear power requirements was made by the International Fuel Cycle Evaluation Programme (INFCE) and issued early in 1980. For the year 2025, INFCE gives a range of installed nuclear capacity from 1800 to 3800 GW(e) for the world, excluding communist countries. For the year 2000, the range is 850 to 1200 GW(e). The 5th Symposium of the Uranium Institute, in September 1980, concluded that even the low forecast for the year 2000 will prove to be high. The 'low' forecast leads to an estimate for 2025 of 6¼ m. Te of ore used and 9 m. Te committed, and these may be regarded as 'ball park' figures. They do not allow for any potential improvements leading to lower specific consumption.

54. In the face of low demand, uranium ore prices have dropped from around $43 per lb in mid-1979 to $28 in early 1981. Stockpiles, held partly by the utilities and governments, are difficult to assess, but are thought to fall in the bracket of 150,000 to 200,000 tons of uranium oxide. This is associated with a probable over-production margin of about 40%. There is unlikely to be a significant world change in demand in the next decade, because of the lack of orders for new power plants; a revival of ordering now would only influence demand a decade later, because of the substantial lead times involved.
55. The characteristics of nuclear plant lead to their operation at the highest load factor the plant can achieve; for estimating purposes this is usually taken as 70%, and whilst it is not universally achieved, plant availability is improving. The reported generating capacity of the world, excluding the Eastern bloc, at December 1980 consisted of 180 installations, comprising 190 reactors in 18 countries and totalling about 120,000 MW(e) of generating capacity.

56. Using current uranium utilisation figures appropriate to the PWR/BWR and AGR families of reactors using enriched uranium fuel, the total uranium demand for 25 years station life at 70% load factor is 3,000 Te for 1 GW(e). However, the demand can be adjusted significantly by choice of the level of enrichment and therefore the percentage of U235 left in the depleted uranium after enrichment processing. The table shows requirements (in Te) for 1 GW(e) for 25 years at 70% load factor.

                            Tails assay (% U235)
                            0.2       0.25      0.3
AGR
  Initial charge            472       505       546
  Replacement loaded        3,724     4,030     4,411
  Replacement discharged    1,125     1,151     1,184
  Final charge              465       495       523
  Total                     2,606     2,889     3,240
PWR
  Initial charge            373       405       444
  Replacement loaded        3,578     3,900     4,300
  Replacement discharged    775       793       815
  Final charge              218       233       252
  Total                     2,958     3,279     3,677
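The totals in the table follow from a simple fuel balance: net lifetime demand = initial charge + replacements loaded, less replacements discharged and the final charge. A minimal sketch of that check (an illustration, not part of the report; the computed AGR figure at 0.3% tails comes out at about 3,250 Te against the printed 3,240):

```python
# Fuel-balance check of the uranium-requirements table (figures in Te,
# keyed by tails assay in % U235). Illustrative only.
table = {
    "AGR": {
        0.20: dict(initial=472, loaded=3724, discharged=1125, final=465),
        0.25: dict(initial=505, loaded=4030, discharged=1151, final=495),
        0.30: dict(initial=546, loaded=4411, discharged=1184, final=523),
    },
    "PWR": {
        0.20: dict(initial=373, loaded=3578, discharged=775, final=218),
        0.25: dict(initial=405, loaded=3900, discharged=793, final=233),
        0.30: dict(initial=444, loaded=4300, discharged=815, final=252),
    },
}

for reactor, cases in table.items():
    for tails, f in sorted(cases.items()):
        # Net demand = what is bought in, less what comes back out.
        net = f["initial"] + f["loaded"] - f["discharged"] - f["final"]
        print(f"{reactor} at {tails:.2f}% tails: net demand ~{net:,} Te")
```

The totals cluster around the 3,000 Te per GW(e) quoted in paragraph 56, and moving from 0.3% to 0.2% tails cuts net demand by roughly a fifth.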
There is scope for combining a decrease of concentration in tails assays with increased fuel irradiation to reduce consumption by about one third, to offset future increases in ore price. Operating practice appears to have developed mainly on the basis of 0.3% tailings, but future scenarios (ETSU Benchmark) tend towards 0.2%. The economic choice will depend on enrichment tariffs and the ultimate effect on these of cost benefits from centrifuge separation plants.

57. The cost of electricity generated by nuclear plant is relatively insensitive to the basic price of uranium. Calculations of generating cost have been done for some typical cases. Principal assumptions are:

Constant money value
5% discount rate
70% load factor
0.2% tails assay in the isotope separation plant

On these assumptions, every $1 per lb increase in the price of uranium adds the equivalent of £3/kW to lifetime station costs. Capital costs ex fuel would be £1,000 to £1,200/kW, so an increase of $100 per lb in the price of uranium would increase electricity costs by about 25% (a rough check of this arithmetic follows the table in paragraph 59). Attention is therefore directed towards the capital cost of nuclear installations, which is discussed later.

58. Uranium is widely distributed in the world in low concentrations unlikely to be of economic significance. Concentrations of uranium of a grade and in quantities likely to be commercially viable are in deposits mainly confined to Precambrian rocks or to younger rocks immediately overlying the basement. Dr S.M.Bowie reviews World and UK Nuclear Fuels in The Watt Committee Report No. 9. Nearly 90% of reserves occur in seven countries: Australia, Brazil, Canada, Niger, Namibia, South Africa and the USA. Further prospects exist in Africa, Australia, Greenland and South and Central America. Estimates of probable uranium requirements have been repeatedly revised downwards since 1975, and this has alleviated the risk of shortage before the end of the century. However, reserves and estimated additional resources at a cost not greater than $80 per kg U are 3.3 million tonnes, and are inadequate to sustain the revised requirements beyond about 1995. Estimated further reasonably assured and estimated additional supplies, if the cost increases from $80 to $130 per kg U, amount to 1.7 million tonnes. However, prices would have to include margins well in excess of extraction costs to create incentive for development.

59. Bowie suggests probable world (other than Eastern bloc) availability of uranium, in round figures, as in the following table:

Probable uranium availability, tU × 10³/year

                         1979    1985    1990    2000
Australia                0.6     10      15      20
Brazil                   0.1     0.5     1       1
Canada                   6.9     12      13      12
France                   2.2     4       4       4
Gabon                    1.0     1       1       1
Niger                    3.3     5       10      10
S. Africa and Namibia    8.9     11      14      15
USA                      14.8    20      30      25
Others                   0.6     5       6       14
Totals (rounded)         38.4    70      95      100
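The sensitivity quoted in paragraph 57 can be checked in a few lines (a sketch using the report's own round numbers; the midpoint capital cost is an assumption):

```python
# Rough check of paragraph 57: each $1/lb on the uranium price is said to be
# worth about 3 GBP/kW of lifetime cost, against capital costs ex fuel of
# 1,000-1,200 GBP/kW.
sensitivity_gbp_per_kw = 3.0            # GBP/kW per $1/lb of uranium
capital_ex_fuel_gbp_per_kw = 1_100.0    # assumed midpoint of 1,000-1,200 GBP/kW

price_rise_usd_per_lb = 100.0
equivalent_capital = price_rise_usd_per_lb * sensitivity_gbp_per_kw  # 300 GBP/kW
fraction = equivalent_capital / capital_ex_fuel_gbp_per_kw

print(f"Equivalent extra capital: {equivalent_capital:.0f} GBP/kW")
print(f"Increase in electricity cost: ~{fraction:.0%}")  # ~27%, i.e. 'about 25%'
```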
60. Whilst the spot market position provides current low prices, the risk is now of inadequate investment to foster further exploration and production capacity, especially in the face of adverse public opinion influencing all aspects of the nuclear fuel cycle. An expansion of nuclear power based on a demand growth of as little as 1.5% p.a. would be likely to lead to demand exceeding probable production availability, unless pressure of demand leads to further exploration and discovery or, alternatively, to improvement in fuel utilisation by the introduction of Fast Breeder Reactors. Figure 2 is attributed to Dr Bowie and illustrates the range of speculation in demand and production capacity.

61. It seems very likely that the cost of electricity from nuclear power will be relatively unaffected by movements of uranium price as influenced by world oil prices, though the latter will have an indirect effect on construction costs.
Figure 2 Comparison of uranium supply and demand
However, the demand for electrical energy will be influenced by its competitiveness with other fuels at the point of consumption, and especially that of natural gas. Forecasting by the CEGB in its low projection scenarios of 1980 to the year 2000 sees only a marginal increase in electricity's share of the total energy market, from 13% to 14%, due to industrial uptake.

ELECTRICITY

62. Electricity is a medium for distribution of energy derived from a range of primary sources whose price, availability and public acceptability are subject to change. The electricity system of the UK developed relatively rapidly after the war period, and for the first decade it was not possible to install plant sufficiently rapidly to restore a satisfactory standard of security of supply. Construction was concentrated on units of 30 to 60 MW. By 1952, the industry had been nationalised and development of a 275 kV supergrid transmission system had begun, the latter permitting connection of larger generating plant. In the late 1950s, plant of a variety of unit sizes, all for coal fired stations, was offered, culminating in a concentration on 500 MW units for several years from 1960 onwards. The first two commercial nuclear power stations, at Berkeley and Bradwell, totalling 520 MW(e), were commissioned in 1962.

63. In 1963, the medium term estimates of growth of peak demand were about 9% p.a., corresponding with a peak ordering rate for new plant of 6000 MW p.a. for two successive years. In 1965, the first AGR was ordered with two units of 660 MW, with unit size equating to those ordered for fossil fuel fired plant. Since then, and recognising that there may have been an element of over-estimating in 1963, electricity growth has been far smaller than expected, the 1966 recession and 1967 devaluation undermining the prospect of sustained economic growth. Also, the discovery and exploitation of natural gas from the southern North Sea fields completely altered the energy supply picture. The oil crisis of 1973 marked a further depression in demand for electricity and effectively removed any incentive to use oil as a fuel for generation of electricity. In the last four years, growth of demand has been about 1% p.a.
64. The electricity supply industry delivers some 94% of all electricity generated, the balance coming mainly from plants integral to the steel, chemical and paper industries. In 1979/80, the analysis of the plant used by the CEGB (to which an addition of 12% will take account of Scotland and Northern Ireland) was:

                                             Plant (GW)        Fuel consumption
                                             (as at 31.3.80)   (mtce)
Coal (incl. dual coal/oil and coal/gas)      40.5              80.6
Gas                                          –                 0.9
Oil                                          8.8               12.3
Nuclear                                      4.4               10.9
Others                                       3.3               –
Total                                        57.0              104.7

And the structure of the average price to the consumer, in p/kWh (data from the CEGB and EC 1979/80 Annual Reports), was:

Fuel                         1.343    (48%)
Depreciation and interest    0.352    (12.5%)
Balance*                     0.444    (15.5%)
Sub-total                    2.139    (76%)
Remainder                    0.665    (24%)
Average price                2.804

*Covers other costs and difference between cost and price
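A quick arithmetic check of this breakdown (an illustration; 'EC' is taken here to be the Electricity Council):

```python
# Components of the 1979/80 average price to the consumer, in p/kWh.
components = {
    "Fuel": 1.343,
    "Depreciation and interest": 0.352,
    "Balance (other costs and cost/price difference)": 0.444,
}
subtotal = sum(components.values())      # 2.139 p/kWh
average_price = subtotal + 0.665         # 2.804 p/kWh

for name, p in components.items():
    print(f"{name}: {p:.3f} p/kWh = {p / average_price:.1%} of average price")
print(f"Sub-total: {subtotal:.3f} p/kWh ({subtotal / average_price:.0%})")
```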
65. The latest published forecast of demand in England and Wales was prepared in October 1980, and was for an unrestricted peak demand of 49.0 GW for 1987/8. To this peak demand the Electricity Industry adds a planning margin of 28%, after having allowed for the value of certain external supplies, in order to determine the plant capacity required in the planning year (a worked sketch of this arithmetic follows paragraph 68).

66. The CEGB has some 13.4 GW of plant under construction (as at 31st March 1980). Coupled with the very low current growth of demand, this would tend to give a surplus of generating plant on the system, but this will be dealt with in due course by the removal from active service of plant surplus to requirements. The total fuel burn on the system will grow only slowly from the figures given in paragraph 64. The CEGB's current plans for new generating capacity comprise one new AGR station at Heysham II, for which orders are currently being placed, and one new PWR station which is proposed for Sizewell B. These stations are proposed as part of the CEGB's thermal nuclear reactor strategy, and not directly on grounds of demand growth or economics (though these stations are in fact expected to prove economic). The prime purpose of the strategy is to lay a sound basis for the later development of nuclear power as and when required. In addition, the SSEB are building the Torness AGR station. This mix of plant would be associated with a nuclear fuel consumption doubling to about 20 mtce and a fossil fuel requirement of 95 mtce. Coal use is expected to remain above 75M. tonnes since, although considerable flexibility to use oil exists, it is most likely that coal use will be maximised.

67. With the upward trend in oil price, there is incentive to limit the use of oil to peaking plant, for which gas turbines are the most convenient form, though for some time modern oil fired steam plant, which after all forms part of the operating margin, may operate at lower load factors and still not warrant replacement or conversion to solid fuel. The most sensitive area of debate will concern the relative economics of coal burning and nuclear power plants, each of which has a range of uncertainties. The working group sees an increasing interaction between the prices of oil and coal over the next 20 years, while the price of North Sea oil is determined by world trade. Coal will be the dominant source of energy for industrial heat and power and, because of long lead times, will only gradually supplant by conversion some of the market for natural gas and oil products. Nuclear plants are much less sensitive to availability of the basic energy source, uranium, for which there is no demand competing with that for electricity generation. On the other hand, capital cost is a dominant feature which has considerable uncertainty, especially in the light of increasingly stringent requirements for environmental and operational safety.

68. The cost of electricity generated from nuclear power, being dominated by the capital cost of the power stations, is therefore sensitive to the effect of the volume of trade in power plant on prices, and to the performance of the plant in terms of availability. Five main thermal reactor types have been developed and exploited outside the Eastern bloc countries, three of these being liquid cooled and two gas cooled. It is the latter on which the UK has hitherto concentrated its investment. About one half of western world nuclear generation is, however, vested in one single class of liquid cooled reactor: the Pressurised Water Reactor.
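Returning to the plant-margin arithmetic of paragraph 65, a minimal sketch (illustrative; the credit for external supplies is ignored):

```python
# Paragraph 65: required plant capacity = forecast unrestricted peak demand
# plus a 28% planning margin.
peak_demand_gw = 49.0     # forecast unrestricted peak, England and Wales, 1987/8
planning_margin = 0.28

required_capacity_gw = peak_demand_gw * (1 + planning_margin)
print(f"Plant capacity required: ~{required_capacity_gw:.1f} GW")  # ~62.7 GW
```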
18
INFLUENCES UPON BASIC COSTS AND PRICES OF PRIMARY ENERGY
Performance of nuclear power stations, September 1980

Reactor type     Per cent    Capacity    Reactor years    Average annual
                 of total    (MWe)       experience       load factor (%)
PWR              52.2        62,283      464              55.9
BWR              31.5        37,052      222              58.6
Magnox           7.1         8,516       381              57.9
PHWR (Canada)    5.3         6,304       73               74.4
AGR (UK)*        2.2         2,040       1st year of      –
                                         operation

*An additional 3000 MW will be commissioned 1981/82.
69. In view of the extent of penetration of the PWR worldwide, especially in the USA, Japan, France and other parts of Europe excepting the UK, it is not surprising that it has tended to be adopted as the lead competitor in the market, against which the potential of other systems must be assessed. This is not an easy exercise, since prices in one country of installation can vary very considerably from those of another, influenced by national safety requirements, manufacturing capacity, labour productivity and the size of infrastructure needed to support the construction and operation of the reactor type. This problem of comparison is not limited to nuclear plant, being common to most novel processes when transferred across national boundaries into different trading situations. Ultimately, firm comparisons can only come from actual experience of building in the country purchasing the plant, and then only if the requirements lead on to repetitive investment. In default, therefore, it is necessary to make judgements on the basis of best estimates, since with the very long lead times involved in nuclear power stations (7 to 10 years) the true cost will only be established after completion.

70. The most recently published assessment of capital costs of new nuclear power plant in the UK was that made in the report of the National Nuclear Corporation to the Secretary of State for Energy in 1977, in which the important comparison was that between the AGR, as the UK's most advanced concept of Gas Cooled Reactor, and the Pressurised Water Reactor, both brought to design criteria expected to meet UK Nuclear Safety requirements. At January 1977 money values, capital and financing costs for the power stations in an ongoing construction programme of 4 stations at about yearly intervals were estimated as:

AGR (2 × 613.5 MW(e) net)    £515–£597 per kW
PWR (1 × 1095 MW(e) net)     £421–£502 per kW

and the fuel cycle and other operating costs as:

AGR    £351–£379 per kW
PWR    £329–£353 per kW

These were estimated to yield generating costs of:

AGR    1.48–1.67 p/kWh
PWR    1.28–1.46 p/kWh
In these estimates, uranium costs account for less than half of the fuel service costs and were foreseen to rise to $40/lb (at 1977 prices) by the end of the station life.

71. These estimates have been the most comprehensive available following a long gap in the ordering of nuclear power stations in the UK. Firmer estimates based on actual UK contracts are not yet available, though ordering for the two AGRs for Heysham II and Torness is substantially underway and proposals for a UK PWR for Sizewell B have yet to face public inquiry. The CEGB in its evidence to the Select Committee of the House of Commons introduced a "middle" figure of £1,000 per kW as a reasonable estimate of the cost of Heysham II at 1980 prices, associated with a generating cost of 2.22 p/kWh. It is understood from informal conversation that this estimate includes an element of post contract allowance derived from the construction experience of Gas Cooled Reactors, not included in the 1977 Thermal Reactor Assessment, as well as an adjustment of the estimated cost to take account of the extra charges arising from the limited commitment to two power stations, Heysham II and Torness, instead of the ongoing programme on which the TRA was based. It is also understood that the value of orders so far placed for the two AGRs supports the adjusted TRA cost and the middle figure of £1,000 per kW.

72. The PWR design has required consideration both in the context of UK safety requirements and of the consequences of the Three Mile Island incident in the USA, and design work is currently well advanced, particularly relating to:
THE WATT COMMITTEE ON ENERGY
19
– Reducing operator radiation exposure nearer to UK Gas Cooled Reactor experience
– Dual containment
– Essential support systems included in containment

From informal conversation, it appears that even with these developments, the difference in cost favouring the PWR over the AGR will remain in the region of 15 to 20% for ongoing repetitive installation. This difference will be less if there is only one single installation, and especially if this is associated with the use of two turbine generators, which would, according to the TRA, add 2% to 3% to generating cost.

73. An important factor in the differential cost of the PWR and AGR arises because the two AGRs currently ordered in the UK may well be the "end of the line" of production, whereas the PWR is supported by the expectation of the supply of components from a buyers' worldwide market. Having regard to this world over-capacity to produce some major PWR components, there will be little early incentive for the UK to establish manufacturing capacity. However, whilst competition from overseas makers of components is likely to be keen, the proportion of the total cost bought outside the UK remains relatively small, at less than 10%. Even with this competition for components from overseas suppliers, the first single PWR will have to bear costs higher than estimated in the TRA for a "family" of four stations, and the advantage over the currently judged cost of new AGRs might therefore be reduced to about 10%. This gap will widen if the PWR design is replicated, the indication being that, at constant price levels, the fourth of a series of near identical plants will be built for 75–80% of the cost of the first (see the sketch following paragraph 75).

74. The comparative cost of electricity generated in the UK from uranium and from coal is affected not only by the cost of the basic fuels, but by the cost of reprocessing of nuclear fuel to allow extraction of plutonium, separation of depleted uranium, disposal of active waste and concentration and storage of highly radioactive waste. Judgement has therefore to be exercised as to possible trends over 30 or more years, in deciding not only on investment in a power station but, over longer periods, on the development of coal or uranium exploration, mining and transportation, and for nuclear plants on transportation and reprocessing of irradiated fuel and wastes. This judgement is complicated by important factors:

1) The investments in mining, fabrication and reprocessing of nuclear fuels are commercial interests separate from that of the generation and supply of electricity, just as investment in coal production is a separate commercial activity.
2) The levels of investment to achieve economic fuel production, whether fossil or nuclear, and nuclear waste disposal, are not capable of matching in a simple way the incremental steps of power generating capacity needed to meet load growth of the electricity system and to replace obsolescent plant.
3) The real availability of the power station plant in terms of shut down needs for maintenance.

75. Uranium prices are continuing to fall in the face of a world recession in demand for nuclear power, and in 1981 are below $30 per lb U. Nevertheless, coal continues to rise in price, associated in part with inflationary pressures and to some degree restrained by competition in world trade. The nuclear fuel cycle is a state monopoly operation in the UK.
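The series-ordering effect quoted in paragraph 73 can be illustrated with a conventional power-law learning curve; the functional form is an assumption here, since the report quotes only the fourth-unit figure of 75–80%:

```python
import math

def unit_cost(first_unit_cost: float, n: int, b: float) -> float:
    """Cost of the n-th unit under a power-law learning curve: cost_1 * n**(-b)."""
    return first_unit_cost * n ** (-b)

for fourth_unit_ratio in (0.75, 0.80):
    # Solve 4**(-b) = ratio for the learning exponent b.
    b = -math.log(fourth_unit_ratio) / math.log(4)
    costs = [unit_cost(100.0, n, b) for n in (1, 2, 3, 4)]
    print(f"4th unit at {fourth_unit_ratio:.0%} (b = {b:.2f}): "
          + ", ".join(f"{c:.0f}" for c in costs))
```

On this basis the second and third units of a series would come in at roughly 85–90% and 80–85% of the cost of the first.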
There are a number of stages in the fuel cycle, and the services for these stages are available in a number of other countries and are traded internationally, resulting in a generally consistent level of pricing for each of these services. Only in the case of the enrichment process are the technologies sufficiently different to show major differences between the capital and running charges, and in the energy consumed in the process. The UK process has the least energy consumption, and since the plant is of modular form it will experience the benefits of technology improvements more than those of scale. For enriched uranium fuel cycles with reprocessing of irradiated fuel, an approximate apportionment of cost at present day prices, including supporting operations, is:

Uranium ore                             20%
Enrichment                              20%
Fabrication                             20%
Reprocessing and vitrification (net)    40%
The trends in prices of uranium and of coal favour generation of electricity by nuclear plant and, at today's prices and estimated plant costs, the price of uranium could increase tenfold before generation costs match those of coal. Escalation in the cost of coal would increase the differential in favour of nuclear power. However, the nuclear fuel cycle, which takes no credit for plutonium contained in waste fuel, has less cost flexibility without impairing overall competitiveness. It has to contend with practical safety problems and adverse public opinion in
dealing with waste movement, treatment and disposal, which can have dominant effects on cost and to which the overall competitiveness will be sensitive. Alternatives such as dry storage of spent fuel at the power station site are being investigated.

76. The principal incentive for development of Fast Fission Reactors has been the prospect of breeding new fissile material and improving the utilisation of the uranium fuel in a combined cycle involving Thermal and Fast Reactors. Thermal Reactors are already creating a stock of plutonium capable of starting the fuel cycle of a Fast Reactor system. Both the UK and France have undertaken development of Fast Reactors with breeding capacity, and each has experience of both the reactors and the fuel cycle; France is the most heavily committed. UK experience is being accumulated from operation of the 250 MW(e) Dounreay Fast Reactor and its fuel processing plant, and French experience is identified with their Phenix. Both reactors are sodium cooled.

77. The world recession in demand for electrical energy and the political opposition in some quarters to nuclear power have led to a much lower world demand for uranium than previously predicted, and to falling uranium prices. *Nevertheless, electricity derived from thermal reactors still retains a cost advantage over fossil fuels sufficiently great to stimulate further investment in nuclear power should the world economy revive even modestly. Those concerned with the availability of uranium continue to recommend exploration, since any sharp expansion in demand could result in relatively rapid depletion of reserves. The introduction of breeding reactors would, in the longer term, relieve demand for uranium completely for many centuries. However, the incentive for the very large investment which would be needed for prototype full scale Fast Breeder Reactors and their fuel cycle facilities is delayed, because the time when uranium price escalation arising from scarcity will occur is not yet discernible. Therefore, interest is being focussed on whether, and at what time, Fast Reactors can be competitive with other nuclear reactors utilising the plutonium now available in spent Thermal Reactor fuel.

78. Current experience of the fast reactor fuel cycle cost is that, given comparable scale of operation, it will be of the same order as for Thermal Reactors.

79. A breeder reactor installation is inherently more costly than a thermal reactor because, in sodium cooled designs, which are those most fully developed, intermediate heat exchange systems are needed and the technology is more complex. Current estimates indicate that the capital cost of a production Fast Reactor may be between 140% and 170% of that of a production PWR in the UK. In France, the comparison between Super Phenix and a PWR produced in France, where the latter has already been extensively deployed, may be as much as 220%. The UK differential capital cost, assuming currently estimated fuel reprocessing prices, would be economically supported by about a tenfold increase in the price of uranium. Therefore, development and design work are directed towards improving working knowledge, and thereby reducing the size and complication of the components and producing more compact layouts. There is little doubt that improvements will result, but the inherently more complex installation will make it unlikely that general introduction of Fast Reactors will be economically attractive until much firmer pressures of demand influence an upward trend in uranium prices.
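The 'tenfold' figure in paragraph 79 can be cross-checked, very roughly, against numbers quoted earlier (an illustration only; it assumes, simplistically, that a breeder avoids ore purchases altogether):

```python
# Combines paragraph 57 (~3 GBP/kW of lifetime cost per $1/lb of uranium),
# paragraph 71 (thermal station capital ~1,000-1,200 GBP/kW) and paragraph 79
# (Fast Reactor capital at 140-170% of a UK PWR).
thermal_capital_gbp_per_kw = (1_000.0, 1_200.0)
fbr_capital_ratio = (1.4, 1.7)
uranium_sensitivity = 3.0      # GBP/kW per $1/lb (paragraph 57)
current_price = 28.0           # $/lb, early 1981 (paragraph 54)

for cap, ratio in zip(thermal_capital_gbp_per_kw, fbr_capital_ratio):
    extra_capital = cap * (ratio - 1.0)                   # GBP/kW
    breakeven_rise = extra_capital / uranium_sensitivity  # $/lb of ore saved
    multiple = (current_price + breakeven_rise) / current_price
    print(f"Extra capital {extra_capital:.0f} GBP/kW -> break-even near "
          f"${current_price + breakeven_rise:.0f}/lb (~{multiple:.0f}x today)")
```

The break-even comes out at roughly six to eleven times the early-1981 price, broadly consistent with the report's 'about a tenfold increase'.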
CONCLUSIONS

80. The remit accepted by the working group was to identify and comment qualitatively on factors affecting the basic costs and prices of primary energy. The dominant single factor is the price of crude oil as determined by the OPEC states, which is far removed from the true cost of extraction and distribution and subject to political manipulation. The confidence needed to support investment in alternative fuels of whatever type is therefore related to judgements on how price will move and to considerations of security of supply. A sustained high OPEC price would appear to encourage both extended offshore exploration for oil and the development of extraction in some countries from shales and tar sands. Further advances in price would then encourage conversion of coal to oil. However, high investments are required, lead times are very long, and the risks associated with changes in demand are considerable. Therefore factors other than immediate price will affect confidence and tend to influence the rate at which investment proceeds in the more costly exploration and development programmes. Recession in world demand for petroleum products is undoubtedly restraining such investment.

81. Formerly, coal commanded the market for industrial and domestic heat, electricity and towns gas. A large part of its market was taken away by cheap oil, natural gas and nuclear energy, its position being retained in other areas because of long term investment in the complex of mines, transportation and coal burning power stations.
*Ref: CEGB Annual Report 1979/80 Appendix 3 (reproduced as leaflet “Costs of Producing Electricity”)
82. When cheap oil began to displace coal for electricity generation, transportation and supplementary gas production, it set a new target against which nuclear power had to compete for generation of electricity. Now that oil has lost its competitive place for electricity generation and industrial and domestic heat, natural gas has taken over the industrial and domestic market previously shared with electricity.

83. The energy scene is complex, and the links between primary sources of energy and final user are illustrated in the diagram of energy flow (Figure 1). It is not a prediction of demand, or of flows, but indicative of those parts of the energy flow pattern which are exposed to pressures for change. It is very relevant to such consideration to recognise that not only are the basic sources of energy (coal, oil and uranium) competing for prime markets, but also that the agencies of conversion and distribution are competing with each other. For instance, coal may reach the end user as gas, liquid fuel or electricity. Gas and electricity may compete for industrial and domestic markets, and electricity may come from nuclear power as well as coal.

84. In the world scene, there will be a growing, dominant demand for liquid fuel for all those forms of transport and small power generation of electricity which depend on the internal combustion engine or gas turbine. The need for chemical feedstock will supplement this demand. As oil reserves are depleted, the price which oil commands will be determined by that which the transportation and chemical industries are able to pay, and this can be expected to be higher than is justified for other applications where substitutes are available for direct use without further conversion. Nevertheless, the price of substitutes cannot escape some influence from the price of crude oil in world trade, and as crude prices increase, further exploration and development of oil fields in more difficult terrains will be stimulated and continue to provide competition. Deposits of shales and tar sands are very considerable, but outside the UK, and their development is attracting environmental objections. Nevertheless, at the OPEC prices currently ruling, large scale pilot plant experience is being obtained, and an OPEC price structure rising to $50 per barrel may trigger off further production which would help to stabilise world crude oil prices. A doubling of oil price by the year 2000 in real terms could enable substantial production from shale/tar sands outside the UK before that date, but long lead times are required, environmental problems will arise, and the investment needed is considerable. In the UK, oil self-sufficiency is seen lasting from now until 1990–2000, and world prices will continue to support offshore exploration and production, subject to encouragement by fiscal policy.

85. The UK is well provided with indigenous coal, and coal may be expected to revive as a world trade as oil prices rise, either artificially or as resources decrease. Coal can satisfy markets for which oil will have priced itself out of competition, but nevertheless, in the longer term of several decades, it will become a source of substitute liquid fuel, and then its price will fall into line as a feedstock for conversion plant and depend on the conversion efficiency. In those territories, like the USA and N. Australia, where coal is abundant and cheaper than in the UK, conversion is likely to be attractive 5 to 10 years sooner.
The long lead times required for development and construction of process plants are such that pilot scale work is desirable even now. The major markets for coal will be electricity generation and bulk industrial heat, with the latter having considerable potential for growth. Given all the uncertainties on the prices of internationally traded coal in the long term, the working party sees a continuing need in the UK to stimulate development of new mines and improved mining techniques. UK coal must not lose the opportunity to share world markets as demand develops for its use as feedstock for conversion in, say, 20–30 years time as oil prices continue to advance.

86. Gas lies in an intermediate position between oil and coal. It is clean and simple of application and can be used efficiently. Conveyance is straightforward. It is very convenient for industrial and domestic use. It is available in the UK in natural form from gas fields which are easy of access at very preferential cost, but in a decade supply will depend on the more distant oil fields and longer submarine pipelines. Gas will not be traded widely in world markets, since shipping in refrigerated storage is costly and complicated, and distribution is tending to be dictated by the economies of pipeline building and operation. This will free the price of gas in part from a relationship with that of world traded oil. Gas supply has important fall-backs by conversion from oil, currently used for "topping up", and by conversion from coal. The price commanded by oil is such that its bulk conversion is not likely to be attractive, and oil itself is a convenience fuel which, if cheap enough, will compete directly with gas. Conversion from coal has significant losses and appears to be relatively costly. Thus, whilst UK offshore oil development continues and submarine pipelines are feasible, the prospects for gas supply will be linked to further exploration and development for oil. Nevertheless, the same problem of lead time exists for coal/gas conversion as for coal/oil conversion, and the technologies are comparable. Investment and operating costs of coal conversion are substantially higher than for oil conversion, though the feedstock would be cheaper, with an overall differential of about 2:1 against coal at present prices. Strategic prudence indicates the need for development of pilot plant for coal liquefaction, having regard to the long lead times needed to reach full commercial operation.

87. Electricity commands, without competition, the requirements for the driving of rotating machinery, for lighting, and for some industrial processes such as refrigeration.
In domestic and industrial markets it has competition from oil, gas and coal, with gas the most significant in terms of price and convenience. In the UK, electricity depends almost entirely on coal and uranium as basic fuels, oil having been priced out of the market except for intermittent use in modern gas turbine plant and in older installations now reserved for peak lopping. No steps being contemplated for improving oil supplies or providing substitutes for oil are likely to lead to the re-adoption of liquid fuel for bulk generation of electricity. Uranium, on presently foreseeable demand and used in currently developed reactor types, will be capable of meeting world demand at competitive costs of generation for several decades. Currently, uranium prices are falling due to low demand, associated with environmental concern relating to radioactivity and especially the disposal of long lived radioactive wastes. Acceptable short and long term solutions should be forthcoming within the next decade, and it is possible that present exploration is inadequate to support an upturn in demand when confidence is regained. The Fast Breeder Reactor, if developed, is capable of improving utilisation of uranium at least tenfold, but its economic justification depends on the future escalation of the cost of uranium. Very long lead times and substantial investment in demonstration plant are needed to produce a capability important at the turn of the century. Whilst electricity currently generated from nuclear power is reported by the Central Electricity Generating Board as cheaper than that from coal at today's prices, it requires a greater level of capital investment in the power station. Even so, the margins are such that for the electricity market uranium will remain in competition mainly with coal. Both will have environmental problems, and both have differing impacts from escalation of capital and fuel costs. Those of coal are related to the location of new mines and further reduction of atmospheric pollution. Those of nuclear power are related to location, long-term disposal of wastes and the less directly related issues associated with proliferation of nuclear weapons. The trend will continue towards a combination of coal fired and nuclear power stations, the choice being influenced by experience of both price and political factors. Electricity will continue to be in competition with natural gas for the domestic and industrial heat market. UK gas prices will increase as dependence moves, firstly, to the development of offshore oil fields and, secondly, to conversion from coal. At the latter stage, nuclear based electricity may effectively compete with substitute natural gas derived from coal for the domestic and industrial heating market.

88. The development of renewable sources of energy will play some part in meeting demand, but cannot, in the view of the working group, be exploited sufficiently to dominate markets and thereby influence fuel costs and prices significantly in the next 50 years. Energy conservation, whilst very important, is not likely to have a large enough impact to influence the price of basic fuels.

89. In essence, the trends anticipated by the working group over the next fifty years for the UK are:

1) Continuing OPEC pressure to dominate world supply and to sustain oil prices well above production cost and increasing faster than inflation.
2) Steady progress in exploration and development of oil, on and offshore, and application of enhanced recovery techniques.
3) Some progress overseas towards oil recovery from tar sands and shales, but with little early effect on the world market price of crude oil and insignificant impact on the UK. A doubling of the world crude oil market price could result in the opening up of substantial reserves, albeit requiring considerable investment and lead time, and therefore a high level of confidence in a sustained market price.
4) Exploitation of natural gas regionally, within the general scope of overland and submarine pipelines, at prices related to the generally acceptable level for energy at the point of delivery.
5) Progress towards replacement of less efficient coalfields by better yielding and more effectively worked new mines, made relatively slow by the resolution of environmental issues. The development is likely to be in the context of a developing world market for coal as a replacement for oil for many applications, prior to the adoption of coal/oil conversion.
6) Pilot plant development at production scale for both substitute liquid fuel and substitute natural gas from UK coal.
7) The use of both coal burning and nuclear power stations for the next 20 to 30 years for the replacement of old plant and extension of capacity of the electrical system, thus sustaining pressure on coal production, uranium exploration and nuclear waste disposal.
8) Development leading to a full scale Fast Reactor with breeding capability at or after the end of the century.
9) Coal and uranium continuing to compete as the primary energy resources for producing electricity. Nevertheless, when substitute natural gas is derived from coal, the cost will be such that nuclear based electricity may be sufficiently competitive to regain a portion of the industrial and domestic premium market. At that time, the price of oil, whether converted from coal or extracted from shale and tar sands, will limit its economic use to transportation and chemical feedstocks.
Some approximate conversion factors
PREFIXES
T = tera = 10¹² (trillion)
G = giga = 10⁹ (billion)
M = mega = 10⁶ (million)
k = kilo = 10³ (thousand)

WEIGHTS & VOLUMES
General: 1 kg = 2.205 lb; 1 cubic metre (m³) = 35.3 cubic ft
Crude oil: 1 barrel = 35 Imperial gallons = 42 US gallons; 1 tonne = 7.3 barrels; 1 million barrels per day (Mb/d) = 50 million tonnes per year
Natural gas: 1 trillion cubic feet (TCF) = 27 billion cubic metres (Gm³)

ENERGY
General: 1 British Thermal Unit (BTU) = 1055 Joules (J); 1 Therm = 100,000 BTU; 1 kilowatt hour (kWh) = 3412 BTU = 860 kilocalories (kcal)

CALORIFIC EQUIVALENTS†
1 million tonnes coal = 26 trillion BTU = 260 million Therms = 6600 teracalories (heat units)
                      = 0.67 million tonnes oil
                      = 0.78 thousand million m³ of natural gas = 27.5 thousand million cubic feet = 75 million cubic feet/day for a year
                      = 8 thousand million kWh*

*1 million tonnes of coal produces about 2700 million kWh in a modern power station.
†Calorific properties of coal and oil from different places vary considerably. These figures are composed from statistical sources in a number of technical publications.
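Gathered into a few constants, the equivalents above can be applied directly (a sketch; the figures are the report's rounded values, not precise standards):

```python
# Conversion factors as quoted above (rounded values).
BTU_PER_THERM = 100_000
BTU_PER_KWH = 3_412
BARRELS_PER_TONNE_CRUDE = 7.3

MTCE = {  # equivalents of 1 million tonnes of coal
    "BTU": 26e12,
    "therms": 260e6,
    "tonnes_oil": 0.67e6,
    "cubic_metres_gas": 0.78e9,
    "kWh_heat": 8e9,
}

# Example: the ~8 thousand million kWh is heat content, 26e12 BTU / 3412:
print(f"{MTCE['BTU'] / BTU_PER_KWH / 1e9:.1f} thousand million kWh of heat")
# Against the ~2700 million kWh actually generated from 1 Mt of coal
# (footnote above), the implied power station efficiency is roughly 35%:
print(f"Implied efficiency: {2.7e9 / (MTCE['BTU'] / BTU_PER_KWH):.0%}")
```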
Part 1: Appendix Energy costs report
A. Cluer,* Consultant, Institute of Petroleum. Prepared for The Watt Committee Costing Working Group in August 1980, with the collaboration of members of the Institute of Petroleum.
* Executive member of The Watt Committee on Energy
Energy costs report
1. Background history of petroleum industry: oil and natural gas consumption trends

The modern petroleum industry is generally considered to have started in 1859, when the first well ever drilled for the specific purpose of locating oil was successful at Titusville, Pennsylvania, close to natural seepages, at a depth of only 69 feet. This development quickly spread in the USA and to other countries, notably the Caucasus in Tsarist Russia, where the great oilfields of Baku were found in the 1870s. By the end of the following decade, these surpassed for a short period the output of the USA. (Since 1974, crude oil production in the USSR has again exceeded that from the USA: BP Statistical Review of the World Oil Industry 1979.)

For the next 40 years there was a steady increase in the output of crude petroleum. Its primary use was in the production of an inexpensive illuminant, although its versatility was such that by the 1890s more than 200 other derivatives accounted for at least half the volume of the industry's total sales. With the turn of the century and the growing use of the motor car, more emphasis was laid on the production of gasoline, and the invention of the Dubbs thermal cracking process in 1914 made possible the production of much greater quantities of gasoline than could be produced from crude oil by distillation only. Use of residual fuels started to displace coal as a marine fuel, and the use of natural gas also increased substantially.

The growing importance of oil and gas as sources of commercial energy during the 50 years up to the energy crisis of 1973/74 is demonstrated in the following table, together with the figures for 1979 to illustrate any change since then.

Table 1 Changing pattern of world commercial energy demand, expressed as percentage of total demand

Year    Solid fuels    Oil fuels    Natural gas    Hydro electricity    Nuclear electricity
1920    86             9            2              3                    –
1930    75             17           5              3                    –
1940    69             21           6              4                    –
1945    60             25           10             5                    –
1950    52             32           10             6                    –
1960    36             41           16             7                    0.03
1970    23 (32)        52 (44)      18 (18)        6 (6)                – (–)
1974    19 (28)        53 (46)      18 (19)        7 (6)                – (1)
1979    – (28)         – (45)       – (19)         – (6)                – (2)

Data source for Table 1: Financing the International Petroleum Industry, N.A.White, Graham & Trotman 1978. Figures in brackets from BP Statistical Review 1979.
Notes: Figures exclude USSR, Eastern Europe and China, except for 1970, 1974 and 1979, when the figures in brackets include these countries and refer to primary energy consumption. The "solid fuels" bracketed figures for 1970, 1974 and 1979 include commercial solid fuels only, i.e. bituminous coal, anthracite and lignite/brown coal.
In terms of actual tonnages of petroleum products and natural gas consumed, the following totals are indicative of the rapid growth rates experienced up till recently, worldwide and for USA, Western Europe and UK.
Table 2 Consumption of oil products and natural gas (million tonnes of oil or oil equivalent)

        World             USA               W. Europe         UK
Year    Oil      Gas      Oil     Gas       Oil     Gas       Oil    Gas
1938    257      –        153     –         36      –         11     –
1950    532      –        321     –         62      –         18     –
1960    1046     434*     384     338*      196     11*       47     0.1*
1965    1543     648*     553     432*      387     20*       73     0.7*
1970    2284     956      695     564       627     73        104    11
1971    2411     1024     719     584       656     93        104    18
1972    2582     1048     776     587       702     114       111    25
1973    2787     1081     818     572       749     130       113    26
1974    2743     1114     783     555       699     147       105    32
1975    2712     1108     766     509       664     154       92     33
1976    2893     1160     822     516       710     164       91     35
1977    2978     1202     866     502       697     170       92     37
1978    3083     1231     888     504       715     179       94     38
1979    3120     1297     863     499       727     186       94     41

Data sources:
1938: BP Statistical Review 1967
1950–65: IP Information Service "World Statistics", Nov. 1970 (oil only)
1970–79: BP Statistical Review 1979
*1960–65 gas figures: Our Industry Petroleum, BP 1977
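The growth rates implied by the world oil column can be made explicit (an illustrative computation from the table's figures):

```python
# World oil consumption from Table 2, in million tonnes.
world_oil = {1938: 257, 1950: 532, 1960: 1046, 1965: 1543,
             1970: 2284, 1973: 2787, 1979: 3120}

def cagr(v0: float, v1: float, years: int) -> float:
    """Compound average annual growth rate over `years`."""
    return (v1 / v0) ** (1 / years) - 1

print(f"1950-1973: {cagr(world_oil[1950], world_oil[1973], 23):.1%} p.a.")  # ~7.5%
print(f"1973-1979: {cagr(world_oil[1973], world_oil[1979], 6):.1%} p.a.")   # ~1.9%
```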
The world oil consumption rate by areas is also depicted graphically in Figure 1 (reproduced from BP Statistical Review, 1979, page 9), in which consumption in the OECD countries is shown separately from the rest of the world. It can be seen that after a period of reduced demand in 1973/75 in all OECD countries, the growth rate was resumed but at a slower pace, and in 1979 was actually declining; whereas in the rest of the world, accounting for over one-third of the total consumption, the growth rate has hardly been affected by Middle East events at all.

2. World reserves of oil and gas, including oil shales and tar sands, their locations and current production rates

2.1 Crude oil

Oil is a non-renewable resource with a finite limit. It has been formed, has moved and has been trapped in many different places and at different geological times, so that no-one can tell with complete certainty what the ultimate reserves of oil in place may be. Attempts at estimating the world's ultimate reserves of oil continue to be made; a generally accepted range at present for "conventional" oil, i.e. crude oil which can be produced by fluid flow from conventionally drilled wells, is 1500–2000 billion (10⁹) barrels (200–270 billion tonnes) (BP Briefing Paper 78/03). Because of the nature of the rock or sand in which the oil has accumulated, it is not practicable or economic to extract all the oil originally in place. 'Proven reserves' are defined as the volume of oil remaining in the ground which geological and engineering information indicate with reasonable certainty to be recoverable from known reservoirs under current economic conditions and by existing operating techniques. Recovery factors may be as low as 20% of the oil in place, and methods to improve the proportion of oil extracted, known as secondary and tertiary (or enhanced) recovery techniques, may be practised when it is or becomes economical to do so. An average recovery factor for North America is 30–35%, and for the North Sea, using water injection techniques of secondary recovery from the outset, perhaps 40%.

Estimates of the world's proven reserves are being continually revised as new information develops. Until recently, these revisions have always been upwards, in spite of rapidly increasing world consumption. Thus, from an estimated world proven oil reserve total of 24 billion barrels (3.3 billion tonnes) in 1935, the figure rose to 298 billion barrels in 1960, 617 in 1970,
Figure 1 World oil consumption 1971–1979
and 666 in 1975 (IP Information Services booklet "Petroleum Reserves", March 1978); however, at the end of 1979 the figure had dropped to 649 billion barrels.

For about the first 70 years of the petroleum industry, the USA had the bulk of the world's proven oil reserves and was the main producer. From 1918, when statistics became more reliable, to 1954, the USA provided each year over 50% of the world's production of crude oil. Since 1954 this share has declined, and in 1974 the USA provided only 18% of world production from 5.5% of the world's published proven reserves. The big shift has been towards the Middle East, which in 1979 had 56% of the reserves and 33% of the production, as illustrated in the following table.

Table 3 World published proven oil reserves and production

                             Reserves (end 1979)                  Production (1979)
Country/area                 10⁹ barrels  10⁶ tonnes  % of total   000 barrels daily  10⁶ tonnes  % of total
USA                          32.7         4200        5.0          10210¹             483         15.0
Canada                       8.1          1100        1.3          1830               86          2.7
Middle East                  361.8²       49200       55.7         21795              1076        33.4
Latin America                56.5³        7900        8.7          5525               280         8.7
Far East & Australasia       19.4         2600        3.0          2885               145         4.5
Africa                       57.1         7600        8.8          6670               324         10.1
W. Europe                    23.6         3200        3.6          2365               116         3.6
(UK only)                    –            (2400)⁴     –            (1600)             (78)        (2.4)
USSR, East Europe & China    90.0⁵        12200       13.9         14410              712         22.0
World total                  649.2        88000       100.0        65710              3222        100.0
Data sources: BP Statistical Review 1979; IP Information Service Booklet, March 1978.
Notes
1 Includes 1675 × 10³ b/d natural gas liquids.
2 The Saudi Arabia content of this figure is about 163, i.e. this one country contains some 25% of the world's published proven reserves.
3 Includes Mexico at 31.3 × 10⁹ bbl (increased from 13.5 × 10⁹ bbl in 1978). In "Latin America and Caribbean Oil Report", published by Petroleum Economist in 1979, the estimate of proven reserves at the end of 1978 was given as 40 × 10⁹ bbl, with a further 45 × 10⁹ bbl of "probable" reserves, indicating in this case discoveries not yet in production. "Potential" reserves were estimated as 200 × 10⁹ bbl, but Mexico combines oil and gas in its reserves estimates. According to Oil and Gas Journal, 31 December 79, Mexico's gas reserves (recalculated as oil equivalent) were about 25% of combined oil and gas reserves.
4 The 1980 'Brown Book' (Development of the Oil and Gas Resources of the United Kingdom 1980, HMSO) gives the following breakdown of oil reserves in present discoveries on the UK Continental Shelf, remaining at the end of 1979:

Proven            1200 million tonnes
Probable          625 million tonnes
Possible          575 million tonnes
Possible total    2400 million tonnes
Total estimated reserves are given as 2200–4400 million tonnes, taking into account possible future discoveries, and noting that about half of the 500–1050 million tonnes in this total of possible future discoveries in areas not yet licensed may be in water depths of more than 1000 feet. In the Brown Book, 'proven' reserves are those which on the available evidence are virtually certain to be technically and economically producible; 'probable' reserves are those estimated to have better than a 50% chance of being technically and economically producible; and 'possible' reserves are those which at present are estimated to have a significant but less than 50% chance of being technically and economically producible.
5 The USSR and China do not publish reserve figures. O & G J, 31 December 79, which is the source of these figures in the BP Statistical Review 1979, states that the USSR figures are "explored" reserves, which include proved, probable and some possible reserves.

2.2 Natural gas

The estimated published proven world reserves of natural gas at the end of 1979, together with 1979 consumption rates, are listed in the following table.

Table 4 World natural gas reserves and consumption
Reserves (end 1979)
TCF1
Consumption 1979
Million tonnes oil equivalent
%of total
Million tonnes oil equivalent
%of total
USA Canada Middle East Latin America Africa W. Europe
194.9 88.1 739.8 144.5 210.4 135.9
4678 2114 17755 3468 5050 3262
7.6 3.4 28.7 5.6 8.1 5.3
499 49 30 44 9 186
38.4 3.8 2.3 3.4 0.7 14.3
THE WATT COMMITTEE ON ENERGY
Country/area
Reserves (end 1979)
TCF1
Million tonnes oil equivalent
%of total
Million tonnes oil equivalent
%of total
(53.4)3
(1282) 21600 3917 61824
(2.1) 34.9 6.4 100.0
(41) 307 175 1297
(UK only) USSR 900.0 Other Eastern Hemisphere 163.2 World total 2576.8 Data source: BP Statistical Review 1979 Notes 1 TCF=trillion (10) cubic feet. 2 Converted from TCF at 1 TCF=24 10 6 tonnes oil. 3 UK reserves figure from 1980 `Brown Book'.
29
Consumption 1979
Proven remaining reserves Probable reserves Possible reserves Possible total
26.6 11.4 15.4 53.4
(3.2) 23.7 13.4 100.0
TCF TCF TCF TCF
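The reserves-to-production figures quoted in the next paragraph follow directly from the totals of Tables 3 and 4 (a rough sketch, using the conversion in Table 4, note 2):

```python
oil_reserves_bbl = 649.2e9        # Table 3, end-1979
oil_production_bpd = 65_710e3     # Table 3, 1979 production, barrels/day

gas_reserves_tcf = 2_576.8        # Table 4, end-1979
gas_consumption_mtoe = 1_297      # Table 4, 1979 consumption
mtoe_per_tcf = 24                 # Table 4, note 2

oil_years = oil_reserves_bbl / (oil_production_bpd * 365)
gas_years = gas_reserves_tcf / (gas_consumption_mtoe / mtoe_per_tcf)
print(f"Oil R/P ratio: ~{oil_years:.0f} years")   # ~27: 'some 30 years'
print(f"Gas R/P ratio: ~{gas_years:.0f} years")   # ~48: 'say, 50 years'
```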
It will be seen from Tables 3 and 4 that world consumption of natural gas is approaching half that of crude oil, and that at current consumption rates crude oil reserves would last some 30 years and natural gas reserves, say, 50 years. However, Table 4 shows that although, as in the case of crude oil, a large proportion of the natural gas reserves is in the Middle East, even more is located in the USSR, and it cannot be expected that USSR gas reserves will be available to the non-Communist world.

In any case, these simple calculations of the duration of reserves represent a gross over-simplification of the position. For crude oil, unless the rate of new discoveries can compensate for the decline in production from existing fields, production must start to fall. To maintain present world production levels would mean finding every year as much oil as has been found in the North Sea in the last 10 years, or two Alaskas; and the prospects of such discoveries are receding. A drilling boom is gathering pace in the USA which, while falling well short of the industry's view of what is needed fully to exploit large untapped reserves, is a clear indication of a protracted effort to make up for some of America's crude oil shortfall. However, although the rate of decline in production is expected to be considerably reduced between now and 1990 as a result of this effort, a production of 6–6.5 million b/d is now forecast for 1990, compared with 8.5 million b/d in 1979 (v. Table 3, Note 1) (Pet. Econ. July 1980, p. 291). Thus, for the USA there is no alternative to continuation of imports or the establishment of a syncrude industry.

The case of natural gas is complicated by the fact that transport from the producing to the consuming areas (other than by pipeline, principally inside the producing areas' own domain, such as in the USA and USSR, and now also in the UK) involves expensive liquefaction plant and ocean transport in special tankers. It has been estimated (SBS "Energy and the Investment Challenge", Sept. 1979) that a typical LNG export project equivalent to 150,000 b/d of oil could cost $4 billion in today's terms, or $27,000 per barrel per day, and that 20 or 30 new LNG projects, in addition to the 12 now operational or in preparation, could be in operation by the year 2000. This would give a total availability of natural gas in line with expected future demand, but at a premium price in line with high cost crudes (see Section 4). It also assumes the political incentive of the Middle East to invest and trade, as discussed in Section 3.

2.3 Shale oil

The following table, taken from BP Briefing Paper 78/03, shows the location of estimated shale oil in place, whether deemed recoverable or not.

Table 5

Country/Area                                               Billion barrels
USA (Colorado, Utah, Wyoming and Alaska)                   2200
Brazil                                                     800
USSR                                                       115
Zaire                                                      100
Other areas having smaller and less promising deposits¹    0–800
Total approximately                                        3000–4000 (say 400–500 billion tonnes)

Note 1: including an estimated 1.58 billion barrels in the UK.
This total is about six times the proven world reserves of conventional crude oil. However, less than 10% of this total is thought to be ultimately recoverable, and probably only 2% could be won with present technology. In the UK, the small Scottish shale oil industry was shut down in 1964, as the cost of mining and retorting made production uneconomic in relation to crude oil prices. At present, significant shale oil operations exist only in the USSR, China, Brazil and the USA, and the five major developments in North America have progressed little beyond the pilot stage.

2.4 Heavy oils and tar sands

These are essentially crude oils which have been depleted of their more volatile components. Deposits of heavy oil, including tar sands, are too viscous to be economically extracted on a large scale by present techniques. They exist principally in Venezuela (the Orinoco Heavy Oil Belt), Canada (Alberta), the USA and the USSR. The reserves amount to an estimated total of 3000–5000 billion barrels of hydrocarbons in place, i.e. rather more than for shale oils. Of these reserves, Venezuela is thought to contain approximately two thirds, while most of the balance is located in Alberta and the USSR. The Athabasca deposit in Alberta underlies some 9000 square miles of territory, at depths from the surface to about 1900 feet. It is up to 300 feet thick and contains about 10% by weight of viscous oil; the total estimated oil in place is 600 billion barrels. At 40% recovery (perhaps an optimistic figure), the Athabasca tar sands hold oil equivalent to roughly half the proven world reserves of conventional oil. Smaller deposits, estimated at 164 billion barrels of oil in place, exist in the Cold Lake area of Athabasca, underlying 3500 square miles at depths from 900 to 1600 feet.

3. Formation of OPEC (Organisation of Petroleum Exporting Countries) and oil price increases of 1973/74 and subsequently (reference N.A.White book, p 19 et seq, and other sources as noted)

OPEC was formed in September 1960 by Venezuela together with Iran, Iraq, Saudi Arabia and Kuwait, with the prime objective of raising the price of crude oil for the benefit of the oil producing countries. It has since been expanded by the accession of Abu Dhabi (now, with Dubai and Sharjah, the United Arab Emirates), Libya, Algeria, Nigeria, Indonesia, Qatar, Ecuador and Gabon. The published proven oil reserves of the 13 OPEC member countries at the end of 1979 were about 64% of the world's total reserves. The corresponding figure for natural gas was 37%, of which over half is in Iran (O.G.J. 31 December 79).

The initial spur to the formation of OPEC came from the long down-trend in crude oil prices, because of competition and lower realisations in the consuming countries, starting around 1957. However, in the early 1950s, profit-sharing agreements on a 50/50 basis had already been entered into between producer country governments and concession-holding companies, based on the concession-holder's posted price for crude oil. In 1962 OPEC published resolutions calling for negotiations with the international oil industry to raise crude oil prices and so increase their revenues; but throughout the 1960s there was a surplus of crude oil availability over demand, and the existence of OPEC made no significant difference to the situation. By the beginning of the 1970s, however, world crude oil demand had more than doubled since 1960 (v. Table 2), and the narrowing balance between supply and demand enabled OPEC to pursue its aims.
A series of agreements was reached between the companies and member governments of OPEC, providing for growing revenues over a 5 year period and increasing the percentage participation of host governments over a 10 year period. These agreements were overtaken by the unilateral actions taken by OPEC from 1973 onwards. The combined effects on oil prices and producer government revenues brought about by the various agreements reached by the oil industry with OPEC over the period 1970–73 are summarised in the following table. The consequences are also shown of the subsequent unilateral actions by OPEC, triggered off by the short Arab-Israeli war at the end of 1973 and continuing to early 1979, during which period 34° API Light Arabian crude was accepted as OPEC's "marker" crude.
Table 6 34° API Light Arabian crude, price ($ per barrel) & Govt. revenue
Political events in Iran at the end of 1978 and the beginning of 1979 led to strikes and reduction of their 6 million b/d oil output to well under 1 million b/d for some months. By the time that production was restored in the middle of 1979 to about half its previous rate, a flourishing black market had developed, with spot cargoes of Middle East crudes selling for up to $38 per barrel, as consumer countries scrambled to increase their stocks against the threat of reduced production by other OPEC states. The moderating effect of Saudi Arabia as the largest OPEC producer (8.5 million b/d out of 30 million b/d, including 21 million b/d from the Middle East) was no longer sufficient to compensate for the Iranian shortfall of 3 million b/d. As a consequence, Light Arabian crude ceased to be recognised as OPEC's marker, and at OPEC's Geneva meeting in June 1979 the reasonable price increases agreed in December 1978 for each quarter of 1979, as shown in Table 6, were abandoned. Instead, compromise agreements were reached between member countries whereby the Light Arabian crude price was increased to $18 per barrel, with differentials for premium quality, location, etc., for other crudes resulting in prices up to $22 per barrel. The next table shows how the official sales prices of various OPEC crudes have increased rapidly during the last 18 months, as the thirteen delegations failed at successive meetings to agree on a unified structure of prices.

Table 7 OPEC crude oils – Official sales prices, $ per barrel

Crude oil            Dec 78   Jan 79   Jul 79   Nov 79   Jan 80   Apr 80   Jul 80
34° Light Arabian     12.70    13.34    18.00    24.00    26.00    28.00    28.00
37° Abu Dhabi         13.04    13.78    21.46    27.46    29.36    29.36    31.36
44° Algerian          14.10    14.80    23.50    33.00*   33.00*   37.21*   37.00
34° Light Iranian     12.81    13.45    22.00    28.50    30.37    35.37    35.37
31° Kuwait            12.22    12.83    19.49    20.49    27.50    27.50    31.50
40° Libyan            13.85    14.59    23.45    26.22    34.67    34.67    37.50 (approx.)
37° Nigerian          14.12    14.82    23.49    26.26    29.99    34.71    37.02
34° Venezuelan        13.99    14.87    22.45    22.45    28.75    30.75    34.85

*Including special exploration surcharge of $3
Data source: Petroleum Economist, appropriate issues during 1979/80.
Saudi Arabia has attempted to reduce spot market activity by stretching its production to the limit of 9.5 million b/d, and, with now adequate stockpiles in consumer countries and reduced demand due to industrial recession and conservation, has succeeded to a considerable extent. However, Saudi Arabian attempts to stabilise crude oil prices by increasing its 34° API crude from $18 to $24 per barrel in advance of the OPEC meeting in December 1979 did not succeed, and they subsequently increased their price by another $2 per barrel. Leap-frogging resulted, with the price confusion and instability that is evident from the table.
In a further attempt to restore some stability, Saudi Arabia refused to increase its price by a further $4 to $32 per barrel, as agreed by the majority of the others at the June 1980 OPEC meeting, and will review the position in September. In the meantime, it will maintain its production at 9.5 million b/d (Pet Econ July 80 p 279) until a unified price structure emerges. It is now generally accepted that many OPEC producers will curtail their outputs to conserve supplies and lengthen the time span over which they can command higher returns. Whereas OPEC production had been previously expected to increase to perhaps 50 million b/d by 2000, it is now clear that the present level of around 30 million b/d is at or near the maximum that will be produced, and even this is liable to political "accidents". Apart from the sovereign right of each OPEC member to decide its own production level, a price policy committee of OPEC is said to favour the indexing of oil prices to take account of dollar depreciation and of inflation in the industrialised countries. In addition, there would be regular increases in the real price to bring it up to equality with the cost of other forms of energy. How this is to be calculated is by no means clear, but it was reckoned in February 1980 to be $30 to $50 a barrel (Pet. Econ. Feb 1980 p 47).

4. Costs, availability and lead times of producing conventional crude oils and synthetic crudes from shales, tar sands and coal

The estimated capital and production costs for conventional and synthetic crude oils, and the timing of potential impact, are given in Table 8 below. The figures represent April 1980 estimates by Shell and therefore take into account an element of escalation due to rising energy prices, although no analysis of this element is available. It is emphasised that these estimates are subject to change over time for a variety of reasons, particularly in the case of syncrudes (e.g. the figures for syncrude from coal in N.W. Europe have increased by 50% in the past year). It is suggested that North Sea capital cost may be up to $25,000 per peak daily barrel by 2000, and syncrude estimates may prove unduly optimistic or even fall through the successful implementation of new processes. Nevertheless, the broad relationships are considered likely to remain valid. The costs are given in 1980 US dollars, and the production costs for syncrudes are based on a commercial rate of return (12% earning power with 100% equity). The information in Table 8 is commented on and supplemented as follows, using information from relevant Shell Briefing Service and BP Briefing Papers, Petroleum Economist and other articles as noted.

Table 8 Capital and production costs ($1980) and lead times, for conventional and synthetic crudes

                                          Estimated capital cost,   Estimated production      Timing of
                                          $ per daily barrel of     cost, $ per barrel of     potential
                                          oil equivalent            oil equivalent            impact
Conventional
  Middle East                             1000–3000                 1–3                       current
  Middle East (delivered NW Europe)       3000–6000                 3–5                       current
  North Sea oil                           4000–15000                4–13                      current
Syncrudes ex:
  Shale (US), tar sands and heavy oil     15000–30000               15–30                     from mid 80s
  Coal (US)                               40000–55000               35–50                     from early 90s
  Coal (NW Europe from imported coal)     50000–70000               45–65                     from early 90s

4.1 Conventional crude oil

Approximately 2 million wells have been drilled since the Titusville discovery well of 1859, and the great majority of these have been "dry holes" (IP Inf. Service "Drilling & Production" 1975). These exploration costs are all part of the final cost of oil to the consumer. Once a commercial oil or gas field has been located, the cost of production varies widely with the size and nature of the field, and the measures then necessary to deliver the oil or gas to the consumer. The rate of flow of oil and gas from a well depends on such factors as the properties of the reservoir rock, the underground pressures, the viscosity and character of the crude oil, and clearly the area and thickness of the pay zones.
A striking indication of the difference in production effort required can be gauged from the fact that Middle East fields have produced a total of about 105 billion barrels since the early years of the century from about 3000 wells, many of which flow at over 10,000 barrels per day, whereas in the USA 1.4 million wells have been drilled, at the rate of between 13,000 and 58,000 each year since 1918, for a cumulative production of about 115 billion barrels; 900,000 of these wells have been shut in or abandoned. The productivity of producing wells worldwide in 1979 is illustrated in Figure 2 (reproduced from BP Statistical Review 1979 p 7), which shows not only the extreme cases of the Middle East at 5500 b/d per well and the USA at 20 b/d per well, as just mentioned, but all the intermediate cases, e.g. Africa (largely Nigeria, Algeria and Libya) at 1520 b/d per well and W. Europe (largely UK and Norway) at 320 b/d per well. Depending on the destination of the crude oil, it is frequently necessary to construct long pipelines to transport the crude oil from the fields to a shipment terminal, e.g. the long Middle East pipelines to the Mediterranean coast, and the recent discoveries in the Peru and Ecuador jungles east of the Andes, linked via trans-Andean pipelines to Pacific coast ports. Such crudes also attract the costs of transporting field equipment to the difficult jungle sites. From the above brief discussion, it is evident that crude oil production costs will be significantly different in different parts of the world. Taking Middle East crude as the yardstick, a capital cost of $1000–$3000 per daily barrel at peak production rate is indicated in Table 8, with a production cost of $1–$3 per barrel. For delivery of the same crude to NW Europe, the extra expenditure on tanker and port facilities raises the capital cost to $3000–$6000 per daily barrel, with a delivered cost of oil at $3–$5 per barrel. As regards offshore production, it is to be expected that this is more expensive than onshore, due to the underwater drilling and production techniques necessary. Early experience in the shallow waters of Lake Maracaibo (Venezuela) and offshore Louisiana was put to good use in the mid-1960s production of natural gas from the Southern basin of the North Sea, using jack-up drilling rigs in 100–200 feet of relatively calm water.
Figure 2 Productivity of producing wells 1979
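The per-well figures in Figure 2 can be cross-checked against the well counts quoted above. For the USA (a rough reconciliation only, using the production rate of 8.2 million b/d cited later in this report):

\[
\frac{8.2\times10^{6}\ \text{b/d}}{20\ \text{b/d per well}} \approx 410{,}000\ \text{producing wells},
\]

which is of the same order as the roughly 500,000 wells still active (1.4 million drilled, less the 900,000 shut in or abandoned).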
Subsequent developments in oil exploration and production in the hostile waters of the Northern North Sea, in 400–600 feet of water, sent costs rocketing upward. The following figures were quoted in 1974 for then current North Sea equipment and operating costs (Baxendell, "The Technology of Offshore Exploration and Development", IP Conference at Aviemore, May 20–23 1974, "The Impact of Offshore Oil Operations"):

                                                                      £ million (1974)
Land exploration well                                                  0.25
Southern North Sea exploration well                                    0.5
Northern North Sea exploration well                                    1–2
Jack-up drilling rig for 200 ft water depth                            8
Semi-submersible drilling rig for 600 ft water depth                  18
Production platform installed without facilities,
  for 500 ft water depth                                              25–40
30" pipeline in 100–150 ft of water                                    0.3 per mile
36" pipeline in 500 ft of water                                        0.8 per mile
Rapidly escalating costs since 1974 have meant that even these expenditures have been exceeded by a considerable margin. For example, the capital cost of a production platform installed without facilities for 500 ft of water depth would now (1980) be £200m, and a 36" dia. submarine pipeline £1.4 million per kilometre (£2.2 million per mile). [Gas gathering pipeline systems in the North Sea. Energy Paper No 30, May 1978, p 31 (+15%)]. As indicated in Table 8, today's production costs for North Sea oil are estimated to be in the range of $4 to $13 per barrel. Neither the easily won conventional crudes such as those from the Middle East, Mexico and Africa, nor the more difficult crudes such as those from the smaller offshore fields and deeper onshore complex structures, attract the very high costs
associated with enhanced recovery and with synthetic crudes from other fossil sources, which are discussed shortly. Conventional crude may thus be classified into 'low cost' (e.g. Middle East) and 'medium cost' (e.g. North Sea). It must be realised that the investment costs associated with future production of conventional crudes will increase in real terms when the more attractive structures have been developed, so that by the year 2000 it is expected that the average capital expenditure for 'low cost' oil will have trebled to $6000 per barrel per day, and for 'medium cost' oil to $15,000 or more, with consequent substantial increases in production costs. (The Magnus field in the North Sea, due to commence production in 1983, is likely to cost $19,000 per barrel per day at peak production rate.) It should also be stressed that these figures are all in 1980 money, i.e. they do not allow for inflation.

4.2 Shale oil

Most oil shales are composed of kerogen (a solid honey-coloured mixture of organic compounds) mixed or combined with mineral materials. The shale is mined and subjected to destructive distillation in retorts at temperatures in excess of 500°C, to recover crude shale oil. A number of consortia in the USA have built pilot plants of about 1000 tons of shale per day capacity. Estimates of shale oil yield vary from 27 to 35 gallons per ton of rock processed. The processing scheme uses potential product as a heat source, which loses about 40% of the potential yield at the retorting stage alone. Because of these inefficiencies, and because of the environmental disturbances associated with open-cast mining and disposal of large quantities of spent shale, in-situ recovery techniques have been investigated. These involve initial break-up of the shale to allow for circulation of air and combustible gases, followed by air injection, controlled burning and finally extraction pumping of the liberated oils and gases. A wide variety of fracture methods has been tried, but the results have been disappointing in economic terms. As shown in Table 8, current estimates of shale oil production costs range from $15 to $30 per barrel of oil, and no commercial impact can be expected until after 1985. According to BP, large scale recovery of shale oil is unlikely until after the feasible exploitation of heavy oils and tar sands, although local factors could affect the pattern. Nevertheless, several new developments in the USA have been indicated in recent weeks (Pet Economist Jan 1980 pp 12–14 and July 1980 p 310), including:

a) a project by Union Oil of California to produce 10,000 b/d of shale oil by the end of 1982 and 50,000 b/d by 1990, at an estimated capital cost of $1.5 billion, corresponding to $30,000 per daily barrel;

b) an announcement last month by Standard Oil of California that it would construct a shale oil demonstration unit in 1982 if it received permits. The project could be expanded to produce 50,000 b/d by 1988 and 100,000 b/d in the 1990s (cost not indicated);

c) contracts entered into by the Dept of Energy with two companies, Superior Oil and Paraho Development, for the design of oil shale demonstration units of about 12,000 b/d oil, each design costing about $7 million, of which $4–$5 million will be government money.

A joint project by Exxon and Tosco (The Oil Shale Corporation) plans a 46,000 b/d plant in Colorado to be on stream in late 1985 at a cost of $1.7 billion (1980 dollars), corresponding to $36,000 per daily barrel.
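The unit capital costs quoted for these projects are simply total project capital divided by peak daily output; as a check of the arithmetic, using only the figures already cited:

\[
\frac{\$1.5\times10^{9}}{50{,}000\ \text{b/d}} = \$30{,}000\ \text{per daily barrel},
\qquad
\frac{\$1.7\times10^{9}}{46{,}000\ \text{b/d}} \approx \$37{,}000\ \text{per daily barrel},
\]

the latter being consistent with the $36,000 quoted for the Exxon/Tosco plant.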
The present US target is to produce 400,000 b/d shale oil by 1990, which seems much more realistic than their previous target of 300,000 b/d by 1985. An incentive for the industry was proposed by the $3/barrel tax credit included in the so-called windfall profits tax legislation approved in April 1980. This tax credit is not available, however, after world oil prices reach $29.50/brl (1979 dollars), plus an inflator.

4.3 Heavy oils and tar sands

Tar sand deposits within about 200 feet of the surface can be recovered by strip mining. These probably represent less than 10% of all heavy oil deposits. Heavy oils deeper than about 500 feet need to be recovered by in-situ methods akin to those used for enhanced recovery of conventional oil (see later). At present, deposits in the 200–500 ft range cannot be recovered at all: they are too deep for strip mining and the overlay is generally too thin for in-situ recovery. A plant 250 miles N of Edmonton strip mining Athabascan tar sand has been in operation since 1967 by Great Canadian Oil Sands. The oil is extracted by treatment with alkaline hot water and the product upgraded by coking and hydrogenation. Output is about 45,000 b/d, and the plant has operated at a loss in every year but one. A second plant, Syncrude Canada Ltd, is owned jointly by a consortium of oil companies and federal and provincial government agencies, and started production in the summer of 1978. Due to various problems, it has been operating at half
capacity but is scheduled to produce 125,000 b/d by 1980 (Sunday Telegraph, 29th April 1979). The capital cost for this plant escalated from $280 million in 1964 to over $2 billion now. At 100,000 b/d production, the per barrel cost is stated by Syncrude to be $15–20. Permission to erect a third plant (Alsands, 140,000 b/d) is being negotiated by a consortium headed by Shell. This could be in production by the middle to late 1980s. A further similar size project is under discussion by Esso Resources Canada. Each of these projects is expected to cost $5–6 billion (Pet. Econ. Feb. 1980 pp 65–70). A large number of schemes are under way in Canada to investigate different aspects of in-situ technology, which overcomes to a large extent the environmental disadvantages of strip mining. Imperial Oil has announced a $4 billion project to produce by in-situ methods 1 billion barrels of heavy oil over 25 years from the Cold Lake deposits, with a target production date of 1985. Peak output of upgraded 32° API syncrude would be 145,000 b/d. In Venezuela, Petroven (the Venezuelan State oil company) is reported to be producing 150,000 b/d of heavy oil from the northern fringe of the Orinoco Belt. Experimental schemes for steam injection and in-situ combustion are under way. The costs of producing syncrudes from tar sands and heavy oils are indicated in Table 8 to be in the range of $15 to $30 per barrel, as for shale oils.

4.4 Synthetic crude oil from coal

The world's coal reserves are very much higher than its oil or gas reserves. Recent estimates show world reserves of economically recoverable coal at 677 billion tonnes, sufficient for about 200 years at present consumption rates. Ultimately recoverable reserves could total 2500 billion tonnes if higher recovery rates can be achieved economically. About 60% of the world's economically recoverable reserves are in the USA, USSR and China. Technically recoverable coal reserves in the UK are estimated at 45 billion tonnes, enough to support the current rate of production for over 300 years, and several times greater than all the oil and gas in the North Sea fields (Coal for the future: Progress with Plan for Coal and Prospects to the year 2000. Dept. of Energy Publication S236214/MP). There are currently three categories of process for producing oil from coal:

a) Synthetic gas route. This process reacts coal with steam and oxygen to give carbon monoxide and hydrogen, which are then recombined using the Fischer-Tropsch process to give gasolines, diesel oils and other compounds. A full scale plant has been in operation at Sasolburg in S. Africa for over 20 years, producing 4500 b/d of gasoline. A second plant to produce over 30,000 b/d of gasoline with other products is scheduled for completion in 1980/81, at a cost estimated in 1975 at about $2 billion. These operations are believed to be uneconomic at present, but are supported by the S. African government for strategic reasons. A third plant is planned, and the economics are improving rapidly with rising world oil prices.

b) Hydrogenation route. Hydrogenation processes involve the addition of hydrogen to coal under heat and pressure, with or without the use of a catalyst. In one scheme (the Exxon Donor Solvent process), the coal is ground, slurried with hydrogenated recycle solvent produced in the process, and pumped into a liquefaction reactor at high temperature and pressure, to produce gas, raw liquids and a heavy bottoms stream containing unreacted coal and mineral matter.
After separation by distillation, the solids and other streams are used for hydrogen production, the ultimate source of the hydrogen being steam. The hydrogen requirement is 3–4% by weight of the coal, and is introduced via the hydrogenated recycle "donor" solvent. The process produces naphtha and fuel oils, and the yield of liquid products is around 3 barrels per ton of coal, the process being completely self-contained in energy requirement and hydrogen production. This process is at the stage of a pilot unit in Texas, costing $350 million and liquefying 250 tonnes per day of coal, which has started up in the last few weeks (Pet Econ. Aug. 80 p 346). A pioneer plant to liquefy 10,000 tonnes per day of coal is under study, and this could be in operation by 1987. The first commercial plant would consist of two streams, each of 10,000 tonnes per day coal intake, producing in all 8500 tonnes per day (=60,000 b/d) of liquid products. The 1978 estimated cost of this plant was $1.6 billion (say $1.8 billion in 1980 dollars) and it could be in operation by 1996 (v "Liquid Fuels from Coal: from R & D to an Industry", L.E.Swabb Jr, Science, 10 Feb. 78, pp 619–622). This capital cost ($30,000 per daily barrel of oil equivalent) is significantly less than the $40,000–$55,000 per daily barrel given in Table 8. In a more recent article by the same author (L.E.Swabb and G.K.Vick to the National Research Council Planning Conference on Synthetic Fuels, Washington DC, October 1979), a more optimistic but still cautious time basis is given: "If the decision were made to build a commercial coal liquids or coal gas plant in 1979, the plant could probably not be brought onstream until about 1985 at the earliest if it were to use existing, fully demonstrated technology, nor until 1987
or 1988 if it were to use technology now in the large pilot plant phase of development." These lead times agree in general with those given in Table 8.

c) Pyrolysis. In this process, coal is thermally decomposed in the absence of air, resulting in the formation of gaseous and liquid products and a char residue. No hydrogen is added to the system, but the required hydrogen/carbon ratio in the gaseous and liquid fraction is obtained by removal of carbon in the char. Yields of liquids and gases are therefore less than in the hydrogenation process, but the process might be competitive if all the products can be utilised, e.g. by gasifying the char.

The variety of coal liquefaction processes indicated above gives rise to a wide range of production costs, which are also heavily dependent on plant location and coal source. The cheapest coal syncrude, $35–$50 per barrel, is considered to be that produced in the USA from indigenous coal, whereas the cost rises to $45–$65 per barrel for NW Europe production from imported coal. According to Shell (1979), US coal is valued at $3 to $5 per barrel oil equivalent, and can be imported to NW Europe at $8 to $10, compared with the cost of indigenous NW Europe coal at $10–$15 (all figures per barrel oil equivalent).

4.5 Enhanced oil recovery (Shell Briefing Service, March 79)

It was indicated earlier that the world resources of potentially recoverable or ultimate reserves of oil were in the range 1500–2000 billion barrels. Other estimates state that the amount of oil originally contained by the reservoirs discovered to date is around 4000 billion barrels. Total world cumulative production to date is 400 billion barrels, or 10% of the oil in place, and the expected total recovery by primary methods (i.e. natural oil flow) would be about 22% of the oil in place. Secondary recovery methods, i.e. pumping of water or gas into the formation through separate injection wells to complement the naturally occurring drive processes, would be expected to yield a further 10%, of which 4% is considered proven by current or firmly planned projects. Tertiary or enhanced recovery techniques aim at still higher displacement efficiencies by the use of:

a) chemicals, e.g. surfactants to reduce surface tension and so improve the sweeping effect of water, or polymers to increase water viscosity and again thereby improve sweep efficiency;

b) heat, introduced by injection of hot water or steam, or by underground combustion. These processes are particularly applicable to the more viscous crudes, and are the only practicable methods yet known of in-situ recovery of heavy oils from tar sands.

The additional crude oil that could be made available worldwide by enhanced recovery techniques could be as much as 400 billion barrels, i.e. equal to the total cumulative production to date. Even so, more than half the world's oil resources, excluding the tar sands and heavy oils, would still remain unproduced. Extending crude oil reserves by enhanced recovery is a technology that is capable of commercial realisation in advance of syncrude production from coal, although according to the Shell Briefing Service it cannot be expected to give any significant production on a worldwide scale in the next 10–15 years. However, under the stress of rising world oil prices, these prospects could be improved.
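The recovery percentages given above can be reconciled as follows (a rough check only, against the approximately 4000 billion barrels of oil originally in place, using no figures beyond those already quoted):

\[
(0.22 + 0.10)\times 4000 + 400 \approx 1680\ \text{billion barrels} \approx 42\%\ \text{of the oil in place},
\]

which is consistent with the statement that more than half of the world's discovered oil would still remain unproduced even with enhanced recovery.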
The US Office of Technology Assessment has recently issued the following estimates of additional reserves that could be made available in the USA by employing enhanced oil recovery techniques: at world prices of $13.75 per barrel (in 1976 dollars, = $18.6 in 1980 dollars), an added 11–29 billion barrels, with production ranging from 0.5–1.0 million b/d in 1985 and 0.7–1.7 million b/d in 1990; at the price at which synthetic oil or other alternative sources might become available ($22 per barrel in 1976, = $30 in 1980), added reserves of 25–42 billion barrels, with production of 0.9–1.3 million b/d in 1985 and 1.8–2.8 million b/d in 1990 (Petroleum Economist, June 1979, p 259). These figures compare with US proven reserves of 30 billion barrels, a production rate of 8.2 million b/d and a consumption rate of about 18 million b/d (SBS p 3, BP Statistical Review 1979), i.e. a very worthwhile undertaking.

4.6 General comment on syncrude etc. production costs

At the present stage in the development of a commercial syncrude production capability, it is difficult to gauge how far increases in world oil prices based on OPEC's marker crude will affect syncrude production costs. All components of syncrude production costs, i.e. extraction of the raw materials, capital costs of plant, energy, labour and other costs of operation, must be affected to some extent by world oil prices, but not in direct proportion. It would appear from Table 8 that a unified OPEC price structure of around $30 to $50 per barrel would open up in economic terms the very large
additional oil resources of heavy oils, tar sands, oil shales and hydrocarbon liquids from coal, and the employment of enhanced oil recovery techniques for conventional oil production. However, the problems of massive investments and long lead times are still with us. In theory, the capability of producing syncrudes or alternative energy sources on a sufficiently large scale is the controlling factor on the ultimate limit to which OPEC's marker crude price can aspire; but the achievement of such a capability is another matter. In the USA, in spite of real progress made to conserve energy and reduce dependence on oil imports (Pet Economist July 1980 p 279), the need for progress in syncrude production is particularly pressing. A recent article by C.C.Garvin Jnr, Chairman of Exxon ("LAMP", Exxon House Journal, 5 Dec 79), discusses the task of creating an American syncrude industry of 15 million b/d to meet their forecast oil and gas shortage early in the next century, over and above the effects of energy conservation and increased direct use of coal. The known recoverable reserves of shale oil and coal in the USA would supply enough syncrude at 18 million b/d to last 50 years (more oil than in the OPEC countries). 300 plants, each of 50,000 b/d capacity, would be required, each costing $3 to 4 billion 'as spent' dollars (i.e. dollars which include the effects of inflation), and it would probably take 30 years to build and put all these into operation. The total cost of $700 billion in 1979 dollars, though huge, is considered to be manageable, being comparable to the share of GNP spent by the oil industry in recent years on conventional oil and gas exploration and production. In addition to the time and money, probably 2 or 3 barrels of water would be required for each barrel of synfuel. Since most of the concentrated oil and shale reserves are located in the semi-arid regions of Colorado and Wyoming, transfers of water from watersheds outside these states would be necessary, requiring "goodwill and collaboration among federal, state and local government and private parties". The paper discusses the other environmental problems and the infrastructure necessary for such gigantic schemes, and pleads for decisions now to assure a new energy future.

5. Conclusions

a) From the foregoing discussion, it is clear that an increasing disparity has developed during the last few years between production costs of the bulk of conventional crude now being produced and its selling price as forced upwards by OPEC. This has had obvious beneficial effects on OPEC member states: their revenues will increase to over 300 billion dollars this year, with unspent balances expected to reach some $140 billion (Pet. Econ. July 80 p 178). However, the UK has benefited also, since the price commanded by 36° API North Sea crude has now risen to over $36 per barrel in line with world prices, and has so rendered economic the development of some of the smaller fields. This means that, given access to adequate exploration acreage and the necessary fiscal incentives, sufficient capital is likely to be attracted to North Sea development to meet expected UK oil demand through to the 1990s and perhaps beyond.
b) The effect of OPEC curtailment of production means that the proportion of 'low cost' and 'medium cost' oil from non-OPEC sources on world markets has become less in advance of the time expected, so that an increasing proportion of high-cost hydrocarbons from the more expensive sources must take the place of OPEC oil as soon as technology and funds are available; but in any case we must anticipate lead times of 10 to 20 years. The massive efforts of the USA to establish a syncrude industry, if realised, are unlikely to be of direct benefit to us in the UK, since it is evident they will be hard pressed to meet their own requirements.

c) The combined effects of growing world energy demands, especially by the developing nations of the Third World, the recent and likely continuing actions by OPEC, the enormous investments needed to produce hydrocarbons from alternative sources, and the long lead times and strong environmental opposition to be overcome, mean that, although demands will be met, the cost of producing the necessary hydrocarbons is likely to lead to a doubling or trebling of oil prices (in today's money values) by the end of the century.

d) Because of all these factors, added incentive is given to the many ways in which energy can be conserved, and particularly oil, e.g. by practising substitution wherever possible and converting petroleum to premium fractions, but these are beyond the scope of the present contribution.
Part 2: Energy and electronics: an introduction
CONTRIBUTORS:
C.W.Banyard,* BOC Limited
A.J.Findlay, Cybernetics Department, University of Reading
Dr G.R.Whitfield, Cybernetics Department, University of Reading
Dr R.Burford, Software Sciences Ltd
J.Schmidt, Linde AG
Professor A.L.Fawe, Montefiore Institute, Liège University
*Executive member of The Watt Committee on Energy
The influence of electronics
C.W.Banyard,* BOC Limited
Institute of Cost & Management Accountants
* Executive member of The Watt Committee on Energy
Perhaps the most all-pervading influence of electronics is evident in the case there is for a readjustment in both personal and scientific values. Many of the values held today are an integral part of what we have chosen to term the "Industrial Revolution". The revolution has been at its most dramatic in the power it has given to create physical strength. This power may be conceived as complementing the strength of a man's arm with a range of devices, from the power drill for "do it yourself", through a whole range of industrial tools, to the fascinating lifting capabilities such as are used in modern shipyards. Equally it may be conceived as the strength inherent in creating high pressures and the equipment capable of their containment. There are many other aspects. Yet the revolution has been incomplete and might be termed a "half revolution", with the other half being the present development of electronics. This is justifiably considered as "the other half" in that, with all the intellectual feats of the last two hundred years, there has thus far not been available a gigantic complementing of the mind of man such as has been achieved in relation to his physical capabilities. The very nature of the "first half" has meant that development and utilisation of energy resources has been a mainspring from which so much of the progress made has been derived. In reflecting upon the progressive use of energy it is striking to note the great increases in efficiency that have characteristically been applied. An early example is that of the efficiency of the reciprocating steam engine, and Table 1 shows the hard-won progress from the days of James Watt over the following period of what might be called initial development.

Table 1 Efficiency of reciprocating steam engines

Year   Engine              Million ft lb per bushel coal   % Thermal efficiency
1775   Watt                  24.0                            2.7
1792   Watt                  39.0                            4.5
1816   Woolf compound        68.0                            7.5
1828   Cornish engine       104.0                           12.0
1834   Cornish improved     149.0                           17.0
1878   Corliss compound     150.0                           17.2
1906   Triple expansion     203.0                           23.0

Source: History of Technology, Vol. IV – The Industrial Revolution, Clarendon Press, 1958
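The two columns of Table 1 are mutually consistent if a bushel of coal is taken as about 84 lb at a calorific value of roughly 13,500 Btu/lb (an illustrative assumption; the source does not state the conversion used), with 1 Btu = 778 ft lb. For the 1775 Watt engine:

\[
\frac{24\times10^{6}\ \text{ft lb}}{84\times13{,}500\times778\ \text{ft lb per bushel}} \approx 0.027 = 2.7\%.
\]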
Since those days the continuing, escalating rate of progress has given endless examples of development and refinement, such that the efficiency with which energy has been utilised has vastly increased. Quite apart from the intervention of electronics, this process still continues, as in the "energy conservation programmes" being applied in many countries of the Western world. There are many facets to the "first-half revolution" which defy classification as the direct production of power, even when simply contemplating efficient utilisation of the primary sources of energy. One such facet is the power of chemical processes based upon indigenous fuels as a feedstock. To emphasise the variety of ways in which increased efficiency can arise in such processes it is appropriate to take the example of ammonia production. At a meeting on "Energy Efficiency in Agricultural Systems",1 the diagram reproduced as Figure 1 was produced, which demonstrates achievements through combining changes in feedstock as they became available with ever larger plants, in which the later generations incorporated centrifugal compressors and advanced heat recovery systems. Given this high efficiency in the production of ammonia, the paper is able to conclude: "Only 1% of the nation's total energy consumption is used for fertiliser production, yet it has been estimated that this 1% is responsible for generating 35% of the nation's crop output, by harnessing extra solar energy". In a word, over a very wide range of activities the major improvements in efficiency that might be contemplated have already been achieved, or are achievable before the new power of the "second-half industrial revolution" comes to be applied
Figure 1 Ammonia production: energy and efficiency
through the ever more diverse impact of electronics. However, many of the main trains of thought must inevitably be a continuation of the past and be concerned primarily with the benefits of increased efficiency. This has the consequence that many of the earlier applications being seen in the electronics revolution are aimed at "jam on the bread" by such improvements. It is the purpose of this paper to argue that as the "second half of the revolution" gets under way it will surely bring forward new and different expectations, promises and achievements, as great and as far reaching as the last 200 years have seen, but with the impetus that comes through springing from the broad industrial base provided by the earlier period. It seems unlikely that this new period will be energy dependent in the sense that has been a feature of the past, for low energy use is a feature of electronics; but where it combines with physical power, additional energy requirements to stretch beyond further horizons seem inevitable.

There is another side to the past experiences of the "first-half revolution", and this is the extent to which conflict has been the trigger for endeavour and achievement. There are at present few signs that the significance of conflict will diminish, but to look with optimism at the future it is necessary to believe that human society does have the alternate capability to put aside the escalating terror of ever more all-embracing conflicts as a trigger to technological advance. It may be that the impetus which is so essential to the progress of technology can be given in this electronics age through the need for more effective energy utilisation. In this way the impending energy shortages may prove an outstanding benefit in the longer term! As the new technology grows, future projects might well be as astoundingly different to our present experience as the speculative wonderings of the current rash of science fiction, albeit like none of them!

Meanwhile, it is not only in the utilisation of energy and the more effective use of primary energy sources that there can be seen a visible need to combine effectiveness with efficiency. An earlier Watt Committee Report2 drew attention to the benefits to be derived by considering the energy implications of various materials, both in their initial production from the naturally occurring ores and in their relative merits in terms of the life and maintenance of the resulting products. Extracts are given in Tables 2 and 3. These demonstrate the variety of solutions available to the designer when considering the production of a given structure. Bearing in mind the vast range of other considerations relating to design, it can be said with confidence that energy features have not been optimised by designers. The limitation to design development, prior to computers, has been the number of interactive variables that could be handled by those concerned. Hence, improvement was inevitably slow, and experience of products in use was frequently the painful way of progressing to further development, with no realistic alternative method of enhancing and improving existing designs. Great strides have been made by using computers for design work so as to handle large numbers of variables, and there is a substantial history of the use of linear programming and other techniques to achieve appropriate models. While it is still probably true that these techniques have not been used to include many of what the designers will have seen as more marginal factors in these computations (such as energy considerations), the ever reducing cost of simulation should lead to more comprehensive models.

Table 2 Energy consumption related to material properties

                                     Tensile    Modulus of  Fatigue               Specific     Total energy (kWh) per Meganewton unit of strength
Material                             Strength   Rigidity    Strength   Density    Energy       Tensile        Modulus of     Fatigue
                                     MN/m2      MN/m2       MN/m2      kg/m3      kWh/kg       Strength       Rigidity       Strength
CAST IRON
  Castings                            400        45000       105        7300      16.0–100.0   292–1825       2.60–16.2      1112–6952
STEELS
  EN1 low alloy, free cutting bar     360        77000       193        7850      16.0         349            1.63           651
  EN24 1.5Ni/1Cr/0.25Mo bar          1000        77000       495        7830      16.0         125            1.63           253
  Stainless 304 18Cr/8Ni sheet        510        86000       250        7900      32.0         229            2.94           487
NON-FERROUS METALS
  Brass 60Cu/40Zn bar                 400        37300       140        8360      27.0         565            6.05           1612
  Aluminium alloy sheet               300        26000        90        2700      79.0–83.0    711–747        8.2–8.6        2370–2490
  Duralumin sheet                     500        26000       180        2700      79.0–83.0    427–449        8.2–8.6        1185–1245
  Magnesium alloy bar                 190        17500        95        1700      115.0        1029           11.17          2058
  Titanium alloy 6Al/4V bar           960        45000       450        4420      155.0        715            15.2           1520
PLASTICS
  Propathene GWM 22                    35         1500         7.5       906      20.0–40.0    517–1034       12–24          2400–4800
  Polythene L.D. XRM 21                13           84         3.25      920      15.0–30.0    1062–2124      165.0–330      4246–8492
  Rigidex 2000                         30         1380         4         950      15.0–30.0    475–950        10.3–21        3563–7126
  Nylon 66 A 100                       86         2850        20        1140      50           663            20             3400
  PVC (R)                              50         1680        12.5      1400      20.0–50.0    560–840        17.0–25        2240–3360
REINFORCED CONCRETE                    38        10000        23        2400      2.3–4.0      145–253        0.55–0.96      240–417
TIMBER
  Hardwood                             14         4500         6         720      0.5          26             0.08           60
  Softwood                              5         2000         3         550      0.5          55             0.14           92
VITRIFIED CLAY                         26        21400         –        2330      1.89         169            0.21           –
GLASS                                 100        30000         –        2500      3.30         83             0.28           –

Table 3 Cost related to material properties

                                     Tensile    Modulus of  Fatigue               Cost         Cost (£) per Meganewton unit of strength
Material                             Strength   Rigidity    Strength   Density    £/tonne      Tensile        Modulus of     Fatigue
                                     MN/m2      MN/m2       MN/m2      kg/m3                   Strength       Rigidity       Strength
CAST IRON
  Castings                            400        45000       105        7300       135         2.5            0.03           9.4
STEELS
  EN1 low alloy, free cutting bar     360        77000       193        7850       180         3.9            0.02           7.3
  EN24 1.5Ni/1Cr/0.25Mo bar          1000        77000       495        7830       240         1.9            0.02           3.8
  Stainless 304 18Cr/8Ni sheet        510        86000       250        7900      1500         23.2           0.14           4.7
NON-FERROUS METALS
  Brass 60Cu/40Zn bar                 400        37300       140        8360      1000         20.9           0.23           59.7
  Aluminium alloy sheet               300        26000        90        2700      1000         9.0            0.1            38
  Duralumin sheet                     500        26000       180        2700      2000         11             0.21           30
  Magnesium alloy bar                 190        17500        95        1700      3700         32             0.35           66
  Titanium 6Al/4V bar                 960        45000       310        4420     11000         50             1.1            157
PLASTICS
  Propathene GWM 22                    35         1500         7.5       906       560         14.5           0.34           68
  Polythene L.D. RM 21                 13            –         3.25      920       535         38             –              151
  Rigidex 2000                         30         1380         4         950       515         16             0.35           122
  Nylon 66 A 100                       86         2850        20        1140      1705         23             0.68           97
  PVC (R)                              50         1680        12.5      1400       445         12.5           0.37           50
REINFORCED CONCRETE beam               38        10000        23        2400        20         1.26           0.005          2.1
TIMBER
  Hardwood                             14         4500         6         670       400         19             0.06           45
  Softwood                              5         2000         3         550       250         27             0.07           46
VITRIFIED CLAY                         26        21400         –        2330        65         5.75           0.007          –
GLASS                                 100        30000         –        2500       130         3.25           0.011          –

Costs at 1.6.1979
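The derived columns of Tables 2 and 3 appear, in general, to follow from multiplying the specific energy (or the cost per tonne) by the density and dividing by the relevant strength; for the cast iron castings, for example:

\[
\frac{16.0\ \text{kWh/kg}\times7300\ \text{kg/m}^{3}}{400\ \text{MN/m}^{2}} = 292\ \text{kWh per meganewton unit of strength},
\qquad
\frac{£135/\text{tonne}\times7.3\ \text{tonne/m}^{3}}{400\ \text{MN/m}^{2}} \approx £2.5.
\]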
While reflecting upon the basic materials for industry it is appropriate to turn to the cost of energy, its pricing and cost/price forecasting. There have been various studies with the help of computer models, and a good example is that produced for the EPRI in 1977,3 from which a partial energy network diagram, as shown in Figure 2, is taken. The total number of links in the full network is stated as 2,400, and this illustrates the impracticality of such studies previous to the availability of computers. Equally, it leads to thoughts of the exciting possibilities as greater experience, lower cost, increasing speed, plug-in programmes, and "easy to use" interactive terminal facilities combine.
Figure 2 Elements of the energy network
It is a small step from this contemplation of modelling for primary energy costs to consider the use of modelling in the exploration of energy conservation in building. The United Kingdom has climatic conditions which result in some 60% of the total energy used being required for the wide variety of buildings made necessary for the storage of goods, the carrying out of industrial and commercial activities, and for general living. Among this variety of uses, schools, colleges and universities are particularly relevant to this contemplation of the "high speed" second phase of the industrial revolution. The implications of change for education and training will be dealt with further, but first attention must be drawn to the considerable work that has been done in reviewing the various structural aspects of educational buildings. The diagram reproduced below (Figure 3)4 deals with insulation levels in a computer model of a typical primary school. This work was published in 1977 and shows dramatically the change from the "previous insulation norm for schools" to the "guideline" insulation being presented at that time. Turning then from the subject of modelling to the implications of change for educational training, there is a new relevance to the mental stature and emotional maturity that has to be achieved among any population aspiring to be among the leaders in world progress. In my view:

1) It is now essential that schooling should relate closely to the technical facts of the community's developed life-style, and should relate to the industrial, commercial and Government or community work that the children will later undertake.

2) Training should relate closely to immediate work requirements, but should be increasingly technically based, with more depth of technical explanation, so that the inevitable retraining for new tasks in later life can commence from good foundations.

The implications for United Kingdom education and training activities are profound. There would appear to be an urgent need for more intensive specific educational ground-work in basic scientific fields. Following this, a fundamental re-thinking of college and university courses would seem appropriate, to take account of the comprehensive support that will be derived from
Figure 3 Total life costs of model with 40% window: wall, related to fabric losses
computers and other electronic technologies, so that this important phase of individual development can bring out those characteristics which will be of most value to the individual and community in future years. That innovative and artistic abilities will most likely be at a premium should not be taken as implying that anything but an extensive scientific basis will serve the individual well in the complexities of the "completed revolution", however much variety of discipline can then be grafted onto this base stock. It might well be that the adding of other disciplines will be most suitably the subject of adult education, and that many subjects at present considered major at first degree level will disappear from that phase in the educational programme. The above observations are made wholly to emphasise the state of change and the need for thoughtful planning with the future firmly in mind, and certainly not to claim any prophetic ability. The rate of change in computer and communication technology in transport well illustrates the pattern of development. A following paper, "Use of Computers in Air Traffic Control with Particular Reference to Evaluation and Planning", takes one aspect of energy and electronics in aviation transport, and it is interesting to note that an earlier Watt Committee Report included a paper on "Air Transport Energy Requirement to 2025"5 in which appeared the diagram reproduced below (Figure 4). The magnitude of the volume change illustrated shows, by focussing on this single aspect, the tremendous change which is impacting upon avionics. If there is a point of doubt it must surely be as to whether the downward slope in the curve will flatten out to the extent depicted for post-1985. When the paper "Use of Computers in Air Traffic Control with Particular Reference to Evaluation and Planning" is coupled with the following paper on "Electronic Controls for Refrigeration Plants with Particular Reference to Energy Savings and Supermarket Applications", the all-pervading interweaving of the futures of energy and electronics is appropriately symbolised. The fabric woven out of the developments in electronics and energy over the next 20 years can readily be seen as of the utmost importance in determining the answers to such adventurous questions for the future as how quality of life, saving of resources for future generations, and peaceful technological advance can be achieved in a situation where there is an unprecedentedly high world population growth and an unprecedented rate of technological change. As a final observation, the future must be dominated by the post-industrial revolution period, when many of the above factors have worked their way through and "the latest" technology operates on a different plane. At this time one would expect to see "expert programmes" so developed as to be commonplace, enabling everyone who had been through a rational educational programme to operate as a professional expert in virtually any field; applications of "catastrophe theory" or other systems for encapsulating catastrophic events or discontinuities; and communications embracing an ever more complete range
Figure 4 Volume reduction of typical computer processor and memory store
of the senses rather than the video we are just beginning to experience. Perhaps by this time there will be a new definition of work, and almost certainly a rebalancing of this against education and training experiences.

References

1 Dr M.D.Hancock, "Energy Demands in Manufacturing Fertilisers and Their Energy Return Values". Paper for UKF Fertilisers meeting: Oct. 1979
2 Professor W.O.Alexander, "Total Energy Content and Costs of Some Significant Materials in Relation to their Properties and Availability". Watt Committee on Energy Report No. 6: "Evaluation of Energy Use", Nov. 1979
3 SRI International, California, "Fuel and Energy Price Forecasts: Quantities and Long Term Marginal Prices". Electric Power Research Institute: Sept. 1977
4 Building Bulletin No. 55, Dept. of Education and Science, "Energy Conservation in Educational Buildings". HMSO: 1977
5 P.Robinson, "Air Transport Energy Requirement to 2025". Watt Committee on Energy Report No. 7: "Towards an Energy Policy for Transport": April 1980
Fundamentals of computer systems
A.J.Findlay, Cybernetics Department, University of Reading
Institution of Electrical Engineers
Figure 1 Computer components
Introduction

This paper is intended as an introduction to computers and computer devices for those who are not familiar with them, and in particular with their applications to measurement and control. There are many possible ways to define a computer, but perhaps the most general is that a computer is a device that stores and manipulates patterns and communicates them with the `outside world'. The most familiar use of computers is as glorified calculators, and although the need to do calculations rapidly and accurately was one of the major driving forces behind their development, modern computers have more general applications. The early electronic computers, for example, were developed for code breaking in the last war – clearly pattern matching rather than calculation. The arithmetic capabilities of computers have received much attention and have been developed to a very high level, but most computers these days spend only a minute proportion of their time doing arithmetic – the rest is taken up with text handling, communication, and translation of high level languages into `machine code'.

During the 60s and early 70s, the computer business was divided roughly into two categories – `mainframes' and `minis'. The mainframes, typified by the large IBM machines, were mostly for commercial and large scientific applications; the minis, typified by the DEC PDP range, were used for everything else, and were designed to be easy to connect to other equipment. More recently, the microprocessor has entered the market, and is now pushing the minis out of some of their traditional applications. The micro, by virtue of its very low cost, has created new computer applications – the `home computer' market is entirely due to the micro. Microprocessors are the main concern of this paper, because their low cost and great versatility make them ideal for use in measurement and control.

Basics

In the introduction, a computer was defined as a device that stores, manipulates, and communicates patterns. This leads to a structure as shown in Figure 1, with a store, a manipulator or `processor', and a communications unit – generally termed the `Input/Output' or `I/O' unit. Normally, all information concerned with the outside world passes through the processor, but it is sometimes more convenient to arrange for the I/O to work directly to and from the store – this is much faster than passing data through the processor. This leads to a triangular structure, as shown in Figure 2.

Figure 2 ‘Triangular structure’

In small systems, it is often more convenient to connect all devices to a common `data highway' or `bus' (bus, as defined by Flanders and Swann, is from the Latin `omnibus', meaning to or from, by, with, or for everybody – a definition as applicable to computers as to public transport). The bus concept (Figure 3) leads to a modular approach to design (Figure 4), as many similar modules may be connected to make a system as large as may be required. This leads to easier design and maintenance, as each module can be considered in isolation – its relationship with the bus is the same, no matter what other modules may be connected.

Figure 3 The bus concept
Figure 4 Modular design

Practical computers are essentially binary in nature – their signals can have only two valid states, `ON' or `OFF'. These states may be ascribed any convenient meaning – `TRUE' or `FALSE', 0 or 1. The information conveyed by such a signal is termed a `binary digit' or `BIT', and although not very powerful individually, n bits may be combined to uniquely represent 2^n states. Most common micros use groups of eight bits known as BYTES, and their store is arranged to hold information in this form. Each storage location is identifiable by its `address', which is another pattern of bits. Input/Output devices can be considered to be like store locations with wires connected to the individual store elements – if information is written into them, it appears as voltage levels on the wires of an output device, and if information is read from an input device, the bit pattern obtained reflects the state of the electrical inputs.

The way in which the patterns of bits are used within the computer to represent information is entirely a matter for the programmer to decide, although some conventions do exist and it is best to follow these where they apply. One such convention applies to the use of binary patterns to represent numbers. Assuming an eight bit word, the bits are numbered from 0 to 7, with bit 0 being the least significant bit. Thus, two raised to the power of the bit number is the numerical `weight' of that bit: 1 for bit 0, 8 for bit 3, and so on. All micros provide instructions for the addition and subtraction of numbers conforming to this convention; some provide multiplication and division as well. Instructions are also provided to perform `logical' operations on bit patterns, the operations AND, OR, NOT, and EXCLUSIVE-OR being the usual set. Of these, the AND instruction is the most frequently used, as it has the property of `masking off' bits that are of no interest. If, for example,
a pattern of eight bits has been read from a peripheral and it is desired to perform some particular action if bit 1 is high, the instruction AND 00000010B could be used to clear all unwanted bits to zero while leaving bit 1 unchanged. The `Jump if not zero' instruction could then be used to start the appropriate action if bit 1 was set. By the use of the logical operators in combination, conditional operations of any desired complexity can be built up.

The application of microprocessors to text handling is gaining in importance, so the mechanism of the process will be briefly described. The first step is obviously to represent the letters of the alphabet in a form suitable for handling by computer. There are many conventions for doing this, but the most widely accepted one is ASCII – the `American Standard Code for Information Interchange'. ASCII uses a pattern of 7 bits to represent the letters (both upper and lower case), the numerals, and a variety of symbols. `Control characters' are also defined, which do not print, but represent actions such as `new line'. The 7 bit ASCII code fits conveniently within the 8 bit word of the microprocessor, the unused upper bit usually being set to zero. Once the text or other alphanumeric data has been converted to a binary pattern, it can be processed in the same way as any other binary pattern. For instance, it is easy to write a program that searches through a string of characters to find the first occurrence of a given character. Some micros (e.g. the Z80) have a machine instruction for this purpose. From this base, it can be seen that a particular sequence of characters could be searched for and possibly replaced by a different sequence. Again, the latest micros (the Z8000 in this case) are sometimes equipped with a machine instruction for this purpose. This search and replace procedure is the basis of text editors such as the very powerful ECCE editor, and of text formatters and other `word processing' systems. The draft of this paper was prepared using a version of ECCE implemented on the Z80 by the author, and a text formatter written in Z80 Pascal by a colleague.

Devices

Most of the actual devices used in microprocessor systems look very similar, as they are all variations on the `silicon chip' theme. The size of the package bears little relation to the complexity of its contents, but is usually determined by the number of connections that must be made to it. Cost, also, is more a function of sales volume than of complexity, and in many devices it is now the cost of the package that dominates the overall price, as the development cost can be written off over many millions of units. Brief descriptions of the more common components are given below to provide an indication of scale.

Processors: These are usually 40 pin chips, costing about £10 and executing one instruction per microsecond. A wide variety is available, the most popular ones using 8 bit words, but 16 bit micros are now widely available and are used where greater processing power is needed.

Store: Storage devices are basically of two types – the `read only memories' (ROM) and the read/write or `random access' memories (RAM). The former type is used to hold permanent data and programs that must always be present, and the latter is used to hold transient data and information used by the processor in the course of running a program. RAM chips usually have 16 to 18 pins, store between 4K and 64K bits, and allow access to any data item within 300 ns.
The application of microprocessors to text handling is gaining in importance, so the mechanism of the process will be briefly described. The first step is obviously to represent the letters of the alphabet in a form suitable for handling by computer. There are many conventions for doing this, but the most widely accepted one is ASCII, the `American Standard Code for Information Interchange'. ASCII uses a pattern of 7 bits to represent the letters (both upper and lower case), the numerals, and a variety of symbols. `Control characters' are also defined, which do not print, but represent actions such as `new line'. The 7 bit ASCII code fits conveniently within the 8 bit word of the microprocessor, the unused upper bit usually being set to zero. Once the text or other alphanumeric data has been converted to a binary pattern, it can be processed in the same way as any other binary pattern. For instance, it is easy to write a program that searches through a string of characters to find the first occurrence of a given character. Some micros (e.g. the Z80) have a machine instruction for this purpose. From this base, it can be seen that a particular sequence of characters could be searched for and possibly replaced by a different sequence. Again, the latest micros (the Z8000 in this case) are sometimes equipped with a machine instruction for this purpose. This search and replace procedure is the basis of text editors such as the very powerful ECCE editor, and of text formatters and other `word processing' systems. The draft of this paper was prepared using a version of ECCE implemented on the Z80 by the author, and a text formatter written in Z80 Pascal by a colleague.
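In C, where character strings are a standard library type, the same first-occurrence search is a single call; the text and target character below are arbitrary examples:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *text = "ENERGY AND ELECTRONICS";
        const char *p = strchr(text, 'L');   /* first occurrence, or NULL if absent */

        if (p != NULL)
            printf("first 'L' found at character %ld\n", (long)(p - text));
        return 0;
    }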
Devices

Most of the actual devices used in microprocessor systems look very similar as they are all variations on the `silicon chip' theme. The size of the package bears little relation to the complexity of its contents, but is usually determined by the number of connections that must be made to it. Cost, also, is more a function of sales volume than of complexity, and in many devices it is now the cost of the package that dominates the overall price as the development cost can be written off over many millions of units. Brief descriptions of the more common components are given below to provide an indication of scale:

Processors: These are usually 40 pin chips, costing about £10 and executing one instruction per microsecond. A wide variety is available, the most popular ones using 8 bit words, but 16 bit micros are now widely available and are used where greater processing power is needed.

Store: Storage devices are basically of two types: the `read only memories' (ROM) and the read/write or `random access' memories (RAM). The former type is used to hold permanent data and programs that must always be present, and the latter is used to hold transient data and information used by the processor in the course of running a program. RAM chips usually have 16 to 18 pins, store between 4K and 64K bits, and allow access to any data item within 300ns. The smaller (4K, 16K) devices cost about £1 per unit, and the 64K ones are about £20 but falling rapidly (mid-1981 prices). ROM chips are classified by the way the data is put into them: for very large quantities, it is physically printed onto the chips during manufacture (masked ROMs); for smaller quantities, the EPROM (Erasable Programmable Read Only Memory) has almost universal acceptance due to the ease with which the `permanent' data may be entered, and later erased for re-use by ultraviolet light. ROMs are commonly available in 24 pin packages, storing between 1K and 8K bytes. They cost about £1.50 per K-byte.

Input/Output devices: As described earlier, all I/O devices appear to the computer as store, but the way they communicate with the outside world varies considerably. The simplest type is the `parallel' device, which just has binary input and output lines; it is suitable for controlling lamps, relays, and other binary devices, and for sensing the state of switches and photodetectors. Another common type communicates data in a serial manner: eight bits are sent in sequence down a single line, and can be reconstructed into a byte at the other end. This is the normal form for communication with Visual Display Units and other computers, as it is easily processed for transmission down a normal telephone line. For some purposes it is useful to be able to handle analogue quantities, so analogue-to-digital converters and digital-to-analogue converters are available, usually dealing with voltage or current signals, but with the use of suitable transducers almost any form of signal can be handled.
There are myriad variations on the themes described above, the general principle being that if a quantity can be measured, it can be fed into a computer, and if it can be influenced, then a computer can be used to control it. It must be remembered, however, that a computer can only follow the instructions given to it, so it is not necessarily any better than any other form of control device.

Miscellaneous hardware: The power supply unit, case, circuit boards, plugs and sockets required to make up a working computer system usually account for a very large proportion of the total cost, a figure of ten times the chip cost being not uncommon.

Software

However good the hardware of a computer system, it is no use without a program. This is, in principle at least, simply a sequence of instructions that define the behaviour of the computer in all circumstances. The concept of an ALGORITHM is of importance here: an algorithm is a COMPLETE set of instructions for a given task, and the concept is applicable to any task, whether carried out by a human, an animal, or a computer. If no algorithm can be formulated for a given task, it is unlikely that a computer will be able to perform that task reliably, so the most important aspect of programming is the formulation of algorithms.

It is usually found that certain sequences of instructions are required in many places in a program. Computers provide facilities to allow these sequences to be written once only and invoked with different data whenever required. These sequences of instructions are termed variously procedures, subroutines, functions, blocks, or processes according to the programming language in use. By the use of this block-structuring facility, software may be designed in a modular fashion: any complex task can be broken down into several less complex sub-tasks. Each sub-task can in turn be broken down until such a level is reached that the programming of a task becomes very simple. The advantages of modular software are similar to those of modular hardware: greater reliability, ease of modification and maintenance, and greater generality allowing some modules to be used in many different products.
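As a trivial illustration of such a reusable module (sketched here in C for concreteness; the data are invented), a sub-task is written once as a function and invoked with different data wherever it is needed:

    #include <stdio.h>

    /* a sub-task written once and reused with different data */
    static double mean(const double *v, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += v[i];
        return sum / n;
    }

    int main(void)
    {
        double boiler[]  = { 64.0, 66.5, 65.2 };
        double ambient[] = { 18.0, 19.5 };

        printf("mean boiler temperature  %.1f\n", mean(boiler, 3));
        printf("mean ambient temperature %.1f\n", mean(ambient, 2));
        return 0;
    }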
The above comments on the modular design of software are applicable to all programming languages, although some have been designed specially with this in mind. Chief among these are the high level languages known collectively as the Algols (Algorithmic Languages), which include Algol-60, Algol-68, Pascal, and their derivatives. These are all `compiled' languages: they are not understood directly by the computer, but are converted, or compiled, into machine code by a special program known as a compiler. Compilers are often written in the language that they are to operate on, and do not necessarily run on the machine that they generate code for. This is particularly true of compilers for microprocessors, as large amounts of memory and disk storage are required to use them, but the code modules produced can be quite small. For example, one particular Pascal compiler for the Z80 microprocessor requires at least 56K bytes of store and a 250K byte floppy disk in order to function, but compiled programs can be as small as 1K byte and will run on dedicated single board computers.

Another important category of languages are the `interpreted' ones, of which the best known is Basic. These languages tend to make modular design of software rather difficult because of the way they have to be designed. Programs written in such a language are never translated into a form that can be directly used by the computer: they are effectively used as data for a program known as an `interpreter', which examines each statement as it comes to it and takes the appropriate action. There is thus a large overhead in the operation of such programs, as the checking of the user's program for silly mistakes must be done every time a given statement is encountered, rather than once and for all before the program starts running. This means that interpreted programs tend to run much slower than the equivalent compiled ones. The main advantage of interpreters is that, because the compilation stage is absent, a program can be run immediately after it has been typed in; a compiler could take several minutes to produce loadable code. The moral of this is that a compiler should be used if a program is to be run many times or if it must run very rapidly, but an interpreter should be used where many changes to the program are expected at frequent intervals.

Applications

It was mentioned in the section on Input/Output devices that almost anything could be monitored and/or controlled by computers. It is generally true that computer applications are limited only by the imagination, and computers should be considered as appropriate alternatives to most conventional control devices. The economics of computer use change almost daily, and it is necessary to keep up with the state of the art to know the most cost-effective approach to a given problem. A typical application in energy management is the peak demand controller, the purpose of which is to monitor the use of electricity in a factory or office and to switch off low-priority loads such as space heating when a pre-set power limit is approached. This is not so much a conservation measure as a device to avoid the very heavy charges levied by the electricity
boards on the maximum level of demand in a given period. The same device could, however, be used to conserve energy by providing it with more data collection facilities and allowing it greater control over the activities of its owners. Thus, if a pattern of energy use could be established, it might be possible to re-schedule some energy-intensive operations to reduce the requirement for heating or cooling.
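The basic control law is simple enough to sketch in full. The following C fragment is illustrative only: the demand figures are invented, the thresholds are arbitrary example values, and a real controller would read its input from a metering transducer and would also integrate demand over the tariff period:

    #include <stdio.h>

    #define POWER_LIMIT_KW 450.0   /* pre-set maximum-demand target (example value) */
    #define MARGIN_KW       30.0   /* start shedding load inside this margin */

    /* stand-in for the metering input; invented figures rising through the limit */
    static double read_total_demand_kw(int minute) { return 380.0 + 10.0 * minute; }

    int main(void)
    {
        int heating_on = 1;        /* the low-priority load: space heating */

        for (int minute = 0; minute < 10; minute++) {
            double demand = read_total_demand_kw(minute);

            if (heating_on && demand > POWER_LIMIT_KW - MARGIN_KW)
                heating_on = 0;    /* approaching the limit: shed the heating */
            else if (!heating_on && demand < POWER_LIMIT_KW - 2.0 * MARGIN_KW)
                heating_on = 1;    /* comfortable margin again: restore it */

            printf("minute %d: demand %.0f kW, heating %s\n",
                   minute, demand, heating_on ? "on" : "off");
        }
        return 0;
    }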
Computers also find application in power generation and distribution: they are used as data loggers and controllers in power stations to monitor the generation process and maximise its efficiency, and also for monitoring and protection of the National Grid. An application currently generating considerable interest is that of controlling internal combustion engines. By measuring various engine characteristics such as speed, torque, exhaust gas temperature and composition, it is possible to optimise the combustion conditions to improve fuel economy. By this means, researchers hope to produce an engine for a small family car capable of more than 100 miles per gallon.

The domestic applications of computers are increasing rapidly, with microprocessors now being found in washing machines, sewing machines and cookers. In all these devices, the computer is used to give greater flexibility of control than was previously possible, while reducing the number of component parts and so the cost and energy content of the equipment. The engineering of these applications is more difficult than might be imagined, as the environment in most domestic equipment is very hostile to electronic equipment: electrical noise, vibration, and extremes of temperature and humidity must be allowed for.

Computers have been gradually finding their way into manufacturing industry for some time, in the form of Numerically Controlled (NC) machine tools, but more recently computer-based stock control systems and robot handling devices have been appearing. The general trend is towards Computer Assisted Manufacturing (CAM), which makes use of all the available techniques of Computer Aided Design, NC tools, and automated warehousing to produce goods more efficiently. Progress in this direction is necessarily slow because of the enormous investment required to gain the maximum benefits.

Systems

Many of the applications described above are of single-chip computers, which have the processor, store, and I/O units all on one wafer of silicon in a single package. The cost of such a unit can be very small in large quantities (about £2 per unit for quantities of 10,000 or more), and there are some types now available which allow one-offs to be built for less than £50. The capability of such single-chip processors is limited by their small stores and by the number of pins available to use for input/output. Where greater capacity is required, single board computers and rack-based systems are used. In a rack-based system, each module shown in Figure 4 would be on a separate plug-in circuit board and they would be connected together by a `backplane' (a large circuit board holding a number of identical sockets connected together). Many such systems exist, most of which are designed for specific types of microprocessor, though some are more general. Examples of the latter are the S-100 bus, which will handle any Intel-derived processor, and Cybus, which is totally independent of processor type. For larger control and monitoring applications, mini-computers are the `conventional' solution, although networks of interconnected microprocessors are beginning to take over in this role. The largest systems are usually administrative in nature and are used to control companies and economic systems (sometimes not very effectively!).

Specification of systems

Few of the readers of this paper will be concerned with the design of microprocessor systems, though many will at some time design a system in which a microprocessor is a component. The distinction is subtle and often blurred, but it does give rise to a communication problem between the system designer and the designer of the component computer system, especially since the computer is likely to be in complete control of the finished product. In these circumstances it is the responsibility of the system designer to specify exactly what is wanted from the computer. The specification should, of course, be drawn up in consultation with the computer systems designer, who should point out what is reasonable and ensure that the specification is capable of being met. The specification will largely be concerned with the way the software is to behave, but cannot be expected to cover all situations: the computer systems designer must try to ensure that the system behaves in the most reasonable and obvious manner possible even in unexpected circumstances.

To a large extent, engineering is the art of obtaining the best possible compromise between the many conflicting requirements placed on a system. Computer engineering is no exception, with the added dimension of the software. The hardware design costs can be traded against the per-unit cost of the finished article, but it may be possible to reduce the software costs drastically by spending more on the hardware. The same principles apply to software: execution speed can be traded for storage requirements, and both can be traded against software design time. As an example, an assembler was recently written by the author for the Z8002 microprocessor. By writing the assembler in Z80 Pascal with little thought for efficiency, it was produced in about four weeks, but it occupies 36K bytes of memory, needs a further 10K-20K for its
working store, and processes about four lines of code per second. If on the other hand it had been written efficiently in assembler language, it would probably have required less than half the store and run five times as fast, but taken three months to write. It is therefore very important when designing systems incorporating computers to consider especially carefully the balance between design costs, manufacturing costs, and operating costs.

The future

The current trend in the computer business is towards smaller, faster, cheaper, smarter hardware, BUT the cost of software is stationary or rising. The big development of the near future is in communications. There are many communication systems available, ranging from the simple serial link down a telephone line to the high-bandwidth optical fibre links now being installed by the Post Office. Long-distance communication will continue to be controlled by government agencies in most parts of the world, with developments like System X taking an increasing amount of data traffic. For `local' communications the choice is wide and growing: for instruments there is the IEEE 488 bus, and for fast communication between computers there are the Cambridge Ring, Ethernet, Decnet and a variety of others. None of these is of universal value, and each serves a specific purpose. Most computer users will probably make use of all of them in the course of their lives, although they are unlikely to be aware of their existence until something goes wrong!

Computers are not inherently electronic: light is a very good information-carrying medium and it can be amplified and switched by lasers. Biological systems are also of interest due to their excellent pattern-recognition abilities; current models are a little slow in other respects and suffer from certain maintenance problems; they also tend to go to sleep at inconvenient moments!
Electronics and energy
Dr G.R.Whitfield, Cybernetics Department, University of Reading; Institution of Electronic and Radio Engineers
"Hear now a song - a song of broken interludes -
A song of little cunning; of a singer nothing worth.
Through the naked words and mean
May ye see the truth between,
As the singer knew and touched it in the ends of all the Earth!"
Rudyard Kipling: A Song of the English

1. Electronics

1.1 Introduction

Electronics is a big subject, far too big to cover in one lecture, so I shall not try to cover much, but to paint the picture with examples. In the first part, I shall deal with electronics, and in the second, with its applications to energy.

Electronics is generally taken to cover any application of electricity, apart from electric power generation and distribution systems, that needs more complex devices than simple switches, batteries and lamps. The key component that makes a circuit electronic is the active device, such as the valve or transistor, which can amplify a signal, i.e. it can be used to make the output of a circuit larger than its input. Electronics started with the development of the thermionic valve by De Forest in 1907, and grew initially as a means of radio communication, later diversifying into other fields. But valves were always large and unreliable, and consumed a lot of power, which limited their application. The invention of the transistor by Bardeen and Brattain in 1948, and its subsequent development, have made possible applications undreamt of in the early days of valves. Major applications at present are radio and its offshoots, television, radar and radio navigation; telecommunications; measuring systems of many types; automatic control systems; and electronic computers. These all take advantage of the major strength of electronics: the only things that move are electrons, whose mass is so low that they can be accelerated much more quickly than any mechanical system. It is usually easy to make the electronic parts of a system so fast that its time delays can be neglected. Most electronic systems suffer from the major disadvantage that their inputs and outputs are not electrical quantities, but something else like the temperature of a solar collector, the strength of the wind, or the sound of a voice; so nearly all electronic systems require input and output transducers to connect them to the outside world. These transducers are often the most troublesome and costly parts of an electronic system.

Most books on electronics deal first with devices, then with circuits using them, and finally perhaps with electronic systems and concepts. I shall do so too, but with reluctance, because the most important parts of the subject are the ideas and the system design; architecture is much more important than the choice of a brick. But it is difficult to design a system without a good understanding of the available parts; many architects design awful buildings due to lack of understanding of materials and their properties. The situation is particularly difficult in electronics because, while most technologies advance, electronics gallops. New devices appear faster than one can assimilate them, so one's detailed knowledge is always fragmentary and out of date. Many of the devices that will be important to our energy future have not been thought of yet; you must forgive me for not describing them.
Figure 1a The 2150 integrated circuit, about 5 mm square, is a complete computer. It contains about 14,000 transistors
1.2 Devices

The key device in modern electronics is the silicon planar transistor. Not just because it is used in enormous numbers (which it is) but because the techniques used to make it have been extended to make a wide range of newer and more advanced devices: diodes, solar cells, integrated circuits, microprocessors, all the "silicon chips" that people worry about. Suffice it to say that the transistor is very small, and can be made in dozens on the surface of a slice of silicon by techniques that are similar to the printing of a sheet of multicoloured postage stamps; repetitive processes are applied to the whole slice, and the detailed structure of the transistor is determined by a series of photographic negatives. A new set of negatives gives a different transistor. To make individual transistors, the "stamps" are separated, and each is mounted separately in a suitable package with three leads; the package is made large enough to be easily handled. These transistors can then be used in circuits to amplify radio signals, to control the speed of electric motors, or whatever is required. They can handle signals up to at least 1000 Megahertz, switch in a few nanoseconds, and handle powers of hundreds of watts.

But if, instead of separating the transistors, they are connected together on the slice of silicon, a complete circuit can be built up on the chip: an integrated circuit. This requires a few more stages of processing, but much less mounting of transistors; a modern integrated circuit (Figure 1), containing 14,000 transistors, needs only one case with 40 connections to the outside world. So integrated circuits are absurdly cheap to manufacture, considering the complexity of the circuits on them. But there is a large overhead cost for the initial design and manufacture of the photographic masks, so integrated circuits are only made when the manufacturer expects to sell large numbers of each type. Integrated circuits are made for use as general purpose amplifiers, for specific applications like the temperature control of ovens, and for major parts of radio receivers. But the largest and most complex integrated circuits are used in digital systems, ranging from simple logic circuits up to a complete computer on one chip.

Other devices are less complex, but equally interesting. Solar cells are just electronic diodes, of large area, on the surface of a slice of silicon. The structure causes a built-in voltage of about 0.5 V, and when light falls on the cell, each photon frees one electron. So a current can be drawn, proportional to the intensity of the light. The process is remarkably efficient, and about 10 or 15% of the energy of ordinary sunlight can be converted into electricity. Similar but smaller cells, called photodiodes, can be used to detect light signals.
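To put rough numbers on this (the arithmetic is illustrative): ordinary sunlight delivers about 100 mW to each square centimetre, so a cell 5 cm in diameter, with an area of about 20 cm², receives roughly 2 W; at 12% conversion efficiency the output is about 0.25 W, which at the 0.5 V cell voltage corresponds to a current of about half an amp.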
Other devices, light-emitting diodes and solid state lasers, can emit light. So communication links can be set up with beams of light, using digital circuits to drive the light source, and driving more digital circuits at the receiving end from the photodiode. The light beam need not be in the open air; it can be confined by multiple reflection in a fine glass rod or fibre. Such glass fibres are much thinner than conventional telephone cables, but their capacity is far greater. Using optical fibres, in the existing ducts, the capacity of the telephone system can be increased thousands of times (Figure 2).
Figure 1b The slice of silicon on which the 2150 chips are made
New devices are being developed all the time. Usually they are merely small improvements on existing devices, raising operating speed, power handling capacity, or economy. But every few years a major step forward is made, allowing new applications of electronics that were previously impossible or hopelessly uneconomic. Some recent developments in this class have been high power field effect transistors, charge coupled devices, and, of course, the microprocessor (remember the days when we all used slide rules?). It is about time for a replacement for the silicon planar transistor, which is 20 years old, but it is hard to see what it might be. Integrated circuits are now so small that the size of a piece of equipment is determined by the size of the operator's fingers; they are so reliable that one can usually ignore the possibility of failure; and they are so fast that there is little to gain from further increase of speed. We have a long way to go before we have fully exploited current devices and their obvious developments.

1.3 Ideas

Electronics is full of concepts and ideas, many of them borrowed from older sciences, but now developed and incorporated. The idea of noise, caused by the thermal motion of atoms and electrons, is derived from thermodynamics. Noise sets fundamental limits to the accuracy with which measurements can be made, signals can be detected, and information can be transmitted. But, with a few rare exceptions, it is only in electronics that these limitations can be approached. So the theory of noise and signal detection has been developed as part of electronics. The same goes for the theory of information, which shows that information can be defined quantitatively, together with the capacities of a store and a communication channel, and sets limits to the rate at which information can be transmitted down a noisy channel.

Another fundamental idea is that of feedback. The idea is very simple (Figure 3). Suppose an amplifier, driven by a signal Si, gives an output So. Subtract some fraction, say one third, of So from Si to give an error signal (Si − So/3), and drive the amplifier with this error signal. Since So is finite, the error can be made as small as we like by making the gain of the amplifier sufficiently large. Then to a good approximation the gain of the "feedback amplifier", So/Si, must be 3, and it does not matter much what the gain of the amplifier happens to be. This principle is widely applied in electronics, and applies equally well to servo systems, such as the power steering gear of ships, and to automatic control systems such as thermostats and pressure controllers. But it also applies to any other system in which the "device" which controls the output takes part of its input from that output: such as democratic government, where the government's performance influences the electorate which ultimately chooses the government; or an economy, where the price of goods influences the demand, which affects the price.
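Written out (with A for the amplifier's own gain, the working shown for completeness):

    So = A (Si − So/3),   so that   So/Si = A/(1 + A/3)

which tends to 3 however large A is made; the closed-loop gain is set by the feedback fraction alone, not by the amplifier.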
Figure 2 Comparison of an optical fibre cable with 4800 pair copper and 18 core coaxial telephone cables. The capacity of the optical cable is greater than either of the other cables
Figure 3 A feedback amplifier
Electronics can give us leads to the understanding of such systems, and tools for analysing and modelling them. In particular, there are clearly defined conditions to be satisfied if feedback systems are to be stable. It comes as no surprise to the engineer that complex systems like the economy, designed without regard to feedback theory, turn out to be unstable.

1.4 System design

System design is not confined to electronics, but it is certainly an important and fundamental part of the subject. It is not easy, and is often overlooked. There is much to be said for the old army approach of the "appreciation" prepared by a junior officer for his C.O. about a proposed operation; it goes
something like this:

Object of the exercise .....
Information about the enemy .....
Information about own troops .....
Courses open to the enemy .....
Courses open to own forces .....
Conclusions .....

By going conscientiously through such a list of headings, he can avoid leaving anything out. In any design exercise one has the same sort of problem. One must first define the object of the design in a quantitative specification. It is important to pick out the real object, or serious mistakes can be made. For example, during the Second World War, when many ships were being sunk by enemy aircraft, a programme was started to fit anti-aircraft guns to merchant ships. After a while, someone noticed that these guns were not shooting down any aircraft, and suggested that the whole programme was a waste of effort and should be cancelled; but then someone else looked at the statistics and found that very few ships with guns were being sunk by aircraft: the guns were in fact doing their real job, of protecting ships, very well.

In electronics, the object specification will certainly include quantitative estimates of the required performance, limitations on size, weight and power consumption, and details of the environment in which the equipment must operate; some of these may not be explicitly stated by the intending user, but the system designer needs to consider them whether they are stated or not. Other things to consider are reliability, the nature and consequences of failure, ease of repair, cost of design, of manufacture, of use, and of repair, and the time available for development. It is important to check as early as possible that the specification is possible, i.e. that it does not conflict with any of the laws of science, and that it does not require anything that is technologically absurd.

Having specified the equipment as far as possible, one has to invent systems that might meet the specification. At this point the search should be very wide; it is fatally easy just to do what someone else did last time, and electronics is advancing very fast. The right choice last time may well be the wrong choice now. A few key questions may help:

1) Is the system (at each point) handling information or power? It is wasteful to use large currents and voltages merely to convey information, and hopeless to try to run motors from minute signals.

2) Is the information analogue or digital? Should it be changed from one to the other, bearing in mind that most inputs and many outputs are analogue, but that digital integrated circuits are so cheap that any complex signal processing may be much easier if the system is digital? If changes are necessary, are they in the right place?

3) How is the system to be divided into separate units? The divisions used last time may no longer be the best. How is the specification to be divided between the units? Which of the units are trivial, in the sense that they are well understood and just need making, and which need careful design or even research? These are the ones which should be assessed first, because if they cannot be made, the whole system will fail. Be careful not to load all the difficult parts of the specification onto one unit, particularly if it is one you don't know much about, and intend to have designed by someone else.

4) Should the units (or techniques, or computer programs) be standard or specially designed for the system?
Standard units have the advantages of known reliability, ready availability and low cost; but special units can fit the job better.

5) Is electronics the right technology anyway? Other possibilities include mechanical systems, hydraulics, pneumatics or the use of people or animals. Would a combined system be better still?

By this time, the designer is, hopefully, left with a small number of possible systems. It is now worthwhile to do a small amount of more detailed design work, and possibly a little research and development, to choose between them.

1.5 Example of system design

As an example, consider the design of a "heat meter" to measure the energy saved by a domestic solar collector. The basic heating system is shown in Figure 4. The incoming cold water is heated by a solar collector and stored in a solar tank, which feeds the ordinary domestic hot water tank. The hot tank is maintained at 60°C by an electric immersion heater. The heat supplied to the hot tank from the solar system can be found by measuring the temperature of the water in the cold tank (Tc) and in the feed to the hot tank (Th) and multiplying the difference by the mass (Q) of water passed.
Figure 4 General arrangement of a solar water heating system
More accurately,

    Heat supplied = ∫ c (Th − Tc) dQ

where c is the specific heat capacity of water. So the system design starts with two temperature sensors and a flow meter. The ideal output is a mechanical digital display (so that the reading cannot be lost by momentary loss of electric power). Accuracy should be 1 or 2%, and cost is important. Clearly one could obtain the required performance with two electronic thermometers (£150), a flowmeter (£50), a data logger (£500) and computer processing (£500). But this is hopelessly complex and expensive, so we have to think. The difficult operations are multiplication and integration. Both can be done by analogue devices, but multipliers are difficult to make, and analogue integrators drift. So the integration should be digital, which suggests the use of an electro-mechanical counter for the output display. But the inputs are analogue, so where should the change to digits be made? The temperature difference can easily be taken by analogue circuitry (using two temperature sensors and a differential amplifier) and converted to digital form in a standard A to D converter. The flow measurement is more difficult; it can be done by measuring the pressure drop across a constriction, or by driving some sort of turbine. If the turbine carries a magnet on one blade this can be detected, and an output pulse can be obtained for each rotation of the turbine, and hence for a known (small) quantity of water. If the digital output of the thermometer is added to the running total every time the turbine produces a pulse, the multiplication and integration take place automatically. It happens that there is an A to D converter integrated circuit that operates by counting a clock waveform, and makes a train of pulses whose number is equal to the digital output. This chip allows the multiplication to be reduced to a simple process of counting. The final system is shown in Figure 5.

The points to notice about this design are that all the pieces fit together like a jig-saw puzzle. The design depends on the availability of a suitable flowmeter, A to D converter, and counter, and on changing from analogue to digital information at the right point. The other thing to notice is that I have left out much of the hard work: the thinking about what is available ("Information about own troops"), the sketched systems that have been rejected, and so on. All this is usually left out of reports, giving the impression that design is a straightforward process; I believe this is one reason why so many people try to skip the design process and go straight into detail.
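The multiply-by-counting scheme is equally easy to express in software. The following C sketch accumulates the same total as the hardware of Figure 5; the temperature readings are fixed stand-ins for the two sensors, and the mass of water per flowmeter pulse is an assumed value:

    #include <stdio.h>

    #define PULSE_MASS_KG 0.1     /* assumed mass of water per flowmeter pulse */
    #define C_WATER       4.2     /* specific heat capacity of water, kJ/kg per degree */

    /* stand-ins for the two thermometers; real values would come from the A to D converter */
    static double read_th(void) { return 45.0; }   /* feed to the hot tank */
    static double read_tc(void) { return 15.0; }   /* cold tank */

    int main(void)
    {
        double heat_kj = 0.0;

        /* one pass per flowmeter pulse: the multiplication and integration
           take place automatically as the total is accumulated */
        for (int pulse = 0; pulse < 1000; pulse++)
            heat_kj += C_WATER * (read_th() - read_tc()) * PULSE_MASS_KG;

        printf("heat supplied: %.0f kJ\n", heat_kj);
        return 0;
    }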
2. The use of electronics in the energy industries

"My name is Legion: for we are many."
St. Mark V. 9

2.1 Conventional sources of energy

The uses of electronics in the energy industries are numerous and mostly obvious. Search and exploration make heavy use of electronic instrumentation for the basic geological research; detecting seismic waves from shot firing or measuring gravity or magnetic field anomalies. Units range from small hand-held short range instruments to remote sensing instruments carried in ships, aircraft, or satellites. When the source of energy has been found it must be evaluated, developed, and exploited.
Figure 5 Block diagram of heat meter
All these stages depend heavily on the use of electronic equipment for navigation, telecommunications, instrumentation, recording and control. In many cases the use of advanced electronic techniques can give major savings in conventional engineering fields or major improvements in the safety of the people operating the systems. For example, in deep water it is difficult to build a structure large enough to stand reliably on the sea bed. What is really required is a platform on the surface that remains in a defined position. It is possible to hold a freely moving structure in a defined position provided that it has suitable engines and the navigation system that defines the position is sufficiently accurate. Automatic positioning systems using electronic sensing of specially placed beacons can be used to hold a platform or drilling ship within a few metres of its desired position under most weather conditions. Underwater inspection, maintenance and construction work previously undertaken by divers can frequently be carried out more conveniently by the use of robots, submersible devices equipped with electronic sensors, television cameras and operating "hands". Apart from the obvious advantage that such devices can be designed to suit the environment in which they are going to be used, they can also be made much smaller than human operators for inspection, for example, of the insides of pipelines; or they can be made much stronger than human operators, so that they can handle large objects and heavy weights. There is a large field for the application of similar devices in coal mines, where they could operate in conditions which were too dangerous, or in seams which were too narrow, for human miners. Their use is of course essential in the maintenance of the cores of nuclear reactors.

The later, downstream, stages of the energy industries, transport, oil refineries, coal processing plants and so on, also depend heavily on electronics, from large process control and communication systems right down to such common devices as electronic petrol pumps and weighbridges. And we should not forget the organisation and administration, which depends increasingly on electronic data storage and communication. (Indeed this paper is being written on a word processor.) But in many respects these devices are just doing by means of electronics what would in the past have been done by other means. More interesting is the application of electronics to unconventional energy sources.

2.2 Unconventional (renewable) energy sources

The major source of unconventional energy is the sun, and a major contribution of electronics to the use of solar energy is the solar cell. The solar cell, being a solid state device with no moving parts, is very reliable and easy to use. It converts sunlight into electricity with an efficiency between 10 and 15%. In full sunlight, an individual cell 5 centimetres in diameter can generate a current of about half an amp at a voltage of half a volt. Large numbers of cells can be connected together in arrays to give any required current and voltage (Figure 6). It is usually convenient to use the cells to charge conventional batteries which can then provide a continuous source of energy, because of course solar cells only operate when light is falling on them. At present the cost of solar cells is around £10 per peak watt, which is about 10 times the cost of a conventional power
Figure 6 Typical solar cell installation, supplying power to an irrigation pump in Cairo
station. But they can run unattended and are already finding numerous applications where relatively small amounts of power are required in awkward situations. For example, they power navigational lights on buoys, light signals on railways, and repeaters in telecommunication systems in remote areas such as Central Australia and the forests of New Guinea. Experimentally, they have been used to pump water in arid areas and supply power, conventional electric power, to whole villages. They have even been used to power an aircraft. At present they are so expensive that applications are limited, but research and development into cheaper materials, cheaper manufacturing techniques and mass production should bring about a steady drop in the cost of solar cells; the cost of other forms of energy is clearly going to go on increasing, so solar cells will become steadily more attractive and their applications will become more numerous. Another interesting possibility is the satellite solar power station. In space there is no weather and no night, so solar cells could be operated continuously; the intensity of the radiation is about 40% higher than it is on the surface, as there is no atmosphere to absorb the incoming radiation. A collecting area of 10 square kilometres could collect a total of 13000 megawatts of sunlight and generate about 1600 megawatts of electricity. This energy could be used to power a number of transmitters operating in the microwave region which would beam the power down to a receiving aerial perhaps a kilometre across on the surface of the earth. The power received by this aerial would then be converted to mains electricity and fed into a conventional power distribution system. The output, 600 megawatts, is about the same as that of a typical modern power station. There is nothing particularly difficult or exotic about such a satellite power station; most of the techniques required are entirely conventional, and the few which are not have already been tested. The only problems are the cost, which again looks like being about 10 times the cost of a conventional ground based power station, and the risk of interference with radio communications and particularly radio astronomy. We use much more heat than electricity and for many purposes it is far better to use solar heat directly rather than to convert it into electricity and then to use the electricity to do the heating. Non-electronic systems, such as solar water or space heaters, can be significantly improved by electronic control. For example, many solar water heaters use pumps to circulate the water, and it is obviously necessary to start the pump when the solar collector is collecting energy and to stop the pump when it is not. This simple control task can be done electrically or electromechanically, but it is very often more convenient to do the necessary temperature measurement and decision making electronically and then to use an electronic or electromechanical switch to start the pump. But electronic control systems are very flexible, and can easily do much more complicated tasks. For instance, a house might be heated primarily by a bank of solar collectors, with a large tank of hot water as a heat store, backed up by a gas boiler and an electric immersion heater. The electronic controller, based on a microprocessor, could control all of these, taking due account of the outside weather, the incidental gains from other energy sources in the house, and the instructions of the householder. 
The problems in designing such a system do not lie in the electronics, but in the system design: specifying what the controller is to do, choosing a friendly and helpful interface for the user, and keeping the cost down.
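The start/stop decision for the circulating pump mentioned above illustrates how little code such a task needs. A minimal sketch in C, with invented temperatures and an assumed switching band (4 degrees to start, 2 to stop) so that the pump does not hunt on and off:

    #include <stdio.h>

    static int pump_on = 0;

    /* run the pump only while the collector can usefully heat the tank */
    static void pump_control(double t_collector, double t_tank)
    {
        if (!pump_on && t_collector > t_tank + 4.0)
            pump_on = 1;    /* collector usefully hotter: start the pump */
        else if (pump_on && t_collector < t_tank + 2.0)
            pump_on = 0;    /* gain too small: stop the pump */
    }

    int main(void)
    {
        double collector[] = { 20.0, 30.0, 40.0, 33.0, 25.0 };   /* invented readings */

        for (int i = 0; i < 5; i++) {
            pump_control(collector[i], 28.0);   /* tank assumed to be at 28 degrees */
            printf("collector %.0f: pump %s\n", collector[i], pump_on ? "on" : "off");
        }
        return 0;
    }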
Wind and water power are primarily sources of mechanical energy, but are often used to generate electricity. The most convenient and economical generators are permanent magnet alternators, but these make a.c. whose voltage and frequency are
proportional to the rate of rotation of the windmill or waterwheel. This output is difficult to use, as a.c. cannot easily be stored, and most electrical devices require a fixed voltage. So most small systems use d.c. generators, or mechanical controllers to control the speed of the alternator. Electronic systems can now do the job better. Rectifiers can convert a.c. to d.c. for storage in batteries, and inverters can turn it back to a.c. of any desired voltage and frequency. Or a.c. of one frequency can be directly converted to another. With small water turbines, one can often do even better. If the demand is low compared with the available water supply, the turbine can be run continuously at full power, eliminating the need for an automatic control valve in the water supply pipe. The speed of the turbine, and hence the voltage and frequency of the output, can be regulated by electronically controlling the load. Any power not required by the primary user is simply diverted to some secondary use, such as heating water, or even thrown away by heating the stream. As a further refinement, in remote areas, the waste electricity might be used to maintain an electrical discharge through air, a well known but inefficient way of fixing nitrogen to make fertilizer.

2.3 Economy and conservation

As energy supplies become scarce, it is increasingly important to obtain the best value from the fuels we use. The first requirement is accurate information, which implies the use of electronic instrumentation. Once we know how to improve the efficiency of a particular device, such as a furnace, we can make the necessary modifications. If the requirement is for "intelligent" control, an electronic control system can be designed to do the job. Some of the improvements are not so obvious. It is well known that electric induction motors are quite good converters from electrical to mechanical energy, with efficiencies of more than 50%. But this efficiency is obtained by running the motor at full power, and most motors are not run at full power. A motor on a lathe, for example, will be powerful enough to drive it under the worst possible load conditions, with a safety factor on top. It may never actually have to run at full power, and most of the time it will be running at less than one tenth of that. With this low power output, the efficiency is less than 10%. A small electronic unit in the power supply line can sense the load on the motor, and reduce the drive accordingly, raising the efficiency to somewhere near the original 50%. Applied to a whole factory or machine shop, such devices would substantially reduce the consumption of electricity.

Another major opportunity for energy conservation is the substitution of low energy systems for high. This is going on all the time, of course, within electronics itself, where one of the main developments is the reduction of the power requirements of individual circuits. But electronic substitutes for older techniques can also give substantial savings. A good example is in the transmission of ephemeral information: things like stock market prices or football results. These can be stored on computer systems and transmitted over telecommunication links with far less energy than would be required to print and circulate paper copies. This is even more true where the need is for the user to have access to a large file of information, such as world airline schedules, most of which he will never actually want to read.
In this case, it is not necessary to transmit most of the information; it can all be stored centrally, and the appropriate sections can be sent on request. Most of this technology is already available; it is called Prestel. Looking ahead a little, telecommunication is cheaper in energy than travel. With the new wide bandwidth optical fibre links, meetings and conferences can be replaced by telephone and television links, with data links for documents. This fits well with the so-called electronic office, where all the work is done on word processors, with data links replacing letters. Indeed, it would be possible to do much of the work from home, saving the commuting travel as well, and greatly reducing the size and energy consumption of the central office.

3. General comments

"Is it true, think you?"

As you can see, there are plenty of applications of electronics in the energy field. Electronics is very powerful, especially in the fields of communication and control. The cost is very low for the complexity available. But there is a significant overhead cost for power supplies and transducers that must be paid for even the simplest systems. So one needs to consider the design of the system rather carefully before deciding whether electronics is the right solution.

But the situation is worse than this. There is an underlying assumption behind this sort of discussion that things are going to go on in the future much as they have in the past, which is obviously not possible as populations increase and resources are depleted. Moreover, we assume the viewpoint of the developed countries, with plentiful resources and rich, well educated people. We should pay much more attention to the Third World, where most of the people live, and resources and education are scarce.
So what is the "Object of the exercise"? Certainly it is not just providing supplies of energy, as these are only the means. The end is to provide goods and services, food, clothing, warmth and entertainment, in a manner that we can sustain for the foreseeable future. We need to take a new look at the design of the system, and electronics can provide the means to do so. Looked at from this point of view, the renewable energy resources become much more important, and the best way to use them is with a multitude of small scale local collecting systems. This takes advantage of the free distribution which is one of their most important characteristics. We should take a fresh look at the things we do, and avoid those that waste resources for no real purpose. Electronics can help here, but only if it is used sensibly, to do more efficiently the things that really need doing, and not wasted doing more and more unnecessary things. It is a sad comment on our civilisation that the biggest use of large integrated circuits is for television games. Electronics gives us the opportunity to do complicated, interesting and useful things; it also offers facilities for unprecedented waste. We must learn to grow up and avoid the five-year-old child's attitude: "I can do it, so I must".
Use of computers in air traffic control with particular reference to evaluation and planning
Dr R.Burford, Software Sciences Ltd; Operational Research Society
Summary

This paper briefly notes the use of computers in ATC and then discusses the different methods which are available for evaluating and planning airport and airspace systems, giving the advantages and disadvantages of each approach. It aims to show that, when dealing with a complex system, Fast-Time Simulation (based on the Event-to-Event Simulator Technique) has a significant role to play in determining how a system will behave under increased traffic load or after the introduction of new facilities. By investigating system behaviour, it is possible to identify bottlenecks which create congestion and delay and to plan them out of the system, thereby creating significant savings in the use of aviation fuel.

The technique of Fast-Time Simulation in Air Traffic Control (ATC) is dependent upon the use of main-frame computers because of the need to manipulate large amounts of data and to process this data cost effectively. The technique, however, does not require the development of any special-to-purpose hardware, and hence use can be made of commercial computer bureaux such as BOC Datasolve. Although this paper specifically addresses ATC applications, the same techniques and arguments have a much wider application to all aspects of airport and aviation operations and to other transportation systems.

Principles of Air Traffic Control (ATC)

The expressed aim of ATC is the achievement of "a safe, orderly and expeditious flow of air traffic". To achieve this, a complex and world wide ATC organisation has been built up based on the concept of "controlled airspace", which basically consists of:

a) Airways, which are equivalent to "motorways" in the sky and are corridors of airspace which are internationally agreed.

b) Terminal Movement Areas (TMA), which are volumes of airspace which surround major airport hubs (such as London) in which aircraft are climbing out of several airports to join the airways system or descending from airways to land at one of the airports within the TMA.

c) Control Zones, which surround a major airport (such as Heathrow or Gatwick) and in which aircraft are sequenced for landing or controlled immediately after departure.

The map at Figure 1 shows the principal airways and TMAs in the United Kingdom. Aircraft flying within controlled airspace have first to obtain clearance from air traffic control and must maintain continuous communication with the various air traffic control centres. The latter are responsible for "sequencing" aircraft, using horizontal, lateral or vertical separation to ensure that they do not collide with other traffic. Two different types of control technique are used, one of a strategic, or procedural, nature, based upon statement of aircraft intention updated by aircraft position reports, and one of a tactical nature based upon radar-derived information. When using only the procedural technique, the uncertainty in aircraft position necessitates the use of large separation criteria, thus leading to an inefficient use of the available airspace. The radar-based technique, which provides frequent (every 10 seconds) and accurate information about aircraft plan position, permits closer aircraft spacing but requires significant data-handling and processing equipment. The network of airways is defined by Reporting Points. Electronic navigational aids are sited at these Reporting Points so that aircraft can establish their position reasonably accurately at the time when they need to report their position to the air traffic control organisation.
Thus, the airways and the Reporting Points provide a geographical frame of reference which enables control to be exercised by collating aircraft position reports with reference to the Reporting Points and by allocating appropriate heights and/or delaying aircraft in such a manner that flight paths do not conflict.
Figure 1 Airways in UK airspace
Fuel consumption

Figure 2 gives approximate fuel consumptions of a selection of aircraft currently in service with the major airlines. If the ATC system is not well organised and has to reroute aircraft, vector them (i.e. move them to one side of the airway to maintain lateral separation), make frequent changes to aircraft level, delay aircraft (either by imposing speed control or making an aircraft fly a "circuit") or make an aircraft fly at a non-optimum level, the effect on fuel consumption can be considerable. For example, a B747 cruising at 29,000 feet will use approximately 6% more fuel than if it was cruising at 35,000 feet; on a journey such as London to New York, this would result in using some extra 4300 kilos, or 1200 gallons, of fuel per flight. The British Airports Authority's Annual Report for 1979/80 gives the number of air transport movements using Heathrow, Gatwick and Stansted as 555,000, the number using the busiest 40 European airports as 3,312,000 and the number using the busiest 48 non-European airports as 9,024,100. If the ATC service could be improved to save just 1 minute per air transport movement, the fuel saving could be in excess of 5 × 10^8 kilos, or 130 million gallons of aviation fuel per annum, or 350 thousand gallons per day (assuming an average fuel consumption of only 2200 kilos per hour).
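The arithmetic is easily checked: the three totals add to about 12.9 million movements a year; one minute at 2200 kilos per hour is roughly 37 kilos per movement; and 12.9 × 10^6 movements × 37 kilos ≈ 5 × 10^8 kilos a year. At the 3.6 kilos per gallon implied by the B747 figures above, this is about 130 million gallons, or roughly 350 thousand gallons a day.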
Aircraft         Taxi-ing   Takeoff   Climb                 Cruise   Descent   Approach
Concorde         4800       96000     36000 (subsonic)      19000    6000      10000
                                      45000 (supersonic)
B747-136         3600       31000     20000                 11500    4000      5000
B747-236         3000       41000     20000                 10500    3200      4000
B707             2000       18000     11000                 6500     1700      3200
L1011 TriStar    2200       20000     12000                 6600     2200      3600
Trident 3        1200       10000     6700                  4000     1200      4000
BAC 111-500      800        6800      4300                  2300     700       2600
B737             1100       7500      5500                  2200     1000      3000

Figures are in kilos per hour per aircraft
Figure 2 Fuel consumption of large civil aircraft
Use of computers in ATC

Such savings are possible by the use of computer systems to aid air traffic controllers in the performance of their duties and to assist in the evaluation and planning of ATC systems. In the former case, dedicated computer systems specially configured and programmed for the work that they are to undertake are required. These computer systems are, for example, used:

a) to handle the communication between air traffic centres;

b) to process flight plan data and to print flight progress strips (which indicate the time and height of the aircraft over each Reporting Point);

c) to extract data from primary and secondary radars, to remove clutter and to transmit radar data in digital form to air traffic control centres;

d) to display radar information in a more meaningful way (position information, aircraft call sign, speed, height and status: climb/descent/cruise);

e) to warn controllers of potential conflict situations.

Computer systems used in this real-time mode of operation have to be designed to operate 24 hours per day, seven days per week, every week of the year, but even so some failures will occur, and hence special arrangements have to be made to ensure that sufficient information is available for manual fallback operation or fallback to less sophisticated equipment. The safety aspects of ATC operation require such systems to have duplicated CPUs, duplicated peripherals and duplicated databases so that the failure of one component does not cause the total system to fail. Software has to be written which enables the system to be reconfigured around a failed piece of equipment, and has itself to be very robust and free of errors and software bugs. This necessitates very extensive testing of a system both off-line and on-line before the system is allowed to go live.

No matter how good a special purpose computer system is in providing assistance to air traffic controllers, it will be limited by the long-term strategic decisions made when the choice of ground based navigational equipment, radar equipment, ATC rules and airport location and layout was made. For this reason the remainder of this paper is devoted to the use of general purpose computers in the evaluation and planning of ATC systems.

Evaluation techniques

A number of evaluation techniques are available to study and measure the performance of airport and airspace systems, ranging from experimentation with the real system to the construction of, and experimentation with, a model which represents the total behaviour of the real system. For the purpose of this discussion, six distinct types of model will be considered:

a) The Descriptive Model, in which a detailed quantitative explanation of the salient features of the system under investigation is prepared by the use of flow diagrams and written descriptions.

b) The Mechanical Model, involving the construction of a physical working model of the total system.

c) The Manual Model, in which pins, for example, are used to represent the position of aircraft on a map of the airport and/or airspace under consideration and are moved according to rules and procedures corresponding to those in the real system.

d) The Algebraic Model, in which mathematical equations are derived which describe the behaviour of the system.
e) The Real-Time Simulator, in which real equipment is operated by real controllers but where the inputs to the equipment are synthetic.
f) The Fast-Time Simulation Model, in which the total system, including all hardware and human participants, is described in purely logical terms within a digital computer.

Experimentation with the real system

For an existing system, the technique of expert observation of its behaviour and the analysis of specifically collected data can achieve substantial short-term benefits, both in terms of reduction in aircraft-related delay and rationalisation of procedures to increase system capacity. However, such methods of operational analysis are necessarily constrained by the generally fixed nature of the operational environment and by the amount of experimentation which is acceptable to both the system operators and system users. Experimentation with changes to rules and procedures in the real system can result in disruption of the system under investigation and can cause confusion to both operators and users unless handled carefully. Furthermore, the pertinent features of the system are not always subject to control, and it is not possible to make experimental studies of the effect of introducing new types of aircraft, navigational equipment, radars, etc., which are still at the design stage.

The most significant disadvantage of experimentation with the real system is the requirement that everything must exist. For example, assessing the effect of a new navigational aid requires that the aid be developed, calibrated and positioned correctly; the effect of increased demand can only be measured by actually flying more aircraft through the system in a given interval of time; and the effect of a new section of concrete can only be measured after the section has been constructed. Although experimentation with the real system is not recommended for most purposes, the process of measuring the performance of the real system is extremely important. It provides information on the performance of system operators and users, data against which models can be validated, and a database of information which can be used for forecasting future system demand.

Descriptive Models

The formation of a detailed Descriptive Model is an intrinsic step in almost all model building, as it describes in detail the operation of the system by use of a series of rules of the type "IF condition THEN action to be taken". For example:

a) IF a wide-bodied jet is followed by a smaller aircraft THEN allow 6 nautical miles separation on final approach.
b) IF two aircraft are on crossing tracks THEN ensure they are vertically separated by 1000 ft or pass the crossing point with at least 3 minutes lateral separation.
c) IF the departure queue exceeds 6 aircraft THEN increase separation between arrivals from 5 nm to 8 nm.

All possibilities should be specified, with flow charts drawn showing the logical sequence of processing the aircraft from entry to exit of the system. All rules and actions should be stated quantitatively wherever possible, and the action of controllers and equipment explained in sufficient detail to enable their effect on the progress of the aircraft to be understood and predicted. As one would expect, the construction of a Descriptive Model sheds a great deal of light on the problem to be investigated, results in a detailed understanding of the system behaviour and indicates those areas of the system design which have not been adequately thought through. Such rules translate almost directly into program logic, as the sketch below illustrates.
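As a minimal illustration of how such IF/THEN rules might be encoded (Python is used here purely for readability; operational models of the period were written in FORTRAN, and the thresholds below simply restate the three example rules, not any real ATC standard):

    # Each Descriptive Model rule "IF condition THEN action" becomes a test.
    WAKE_SEPARATION_NM = 6.0        # rule (a): wide-body followed by smaller type
    NORMAL_SEPARATION_NM = 5.0
    CONGESTED_SEPARATION_NM = 8.0   # rule (c): long departure queue
    MAX_DEPARTURE_QUEUE = 6

    def arrival_separation(leader_type, follower_type, departure_queue_len):
        """Return the separation (nm) to apply on final approach."""
        if leader_type == "wide-body" and follower_type != "wide-body":
            return WAKE_SEPARATION_NM           # rule (a)
        if departure_queue_len > MAX_DEPARTURE_QUEUE:
            return CONGESTED_SEPARATION_NM      # rule (c)
        return NORMAL_SEPARATION_NM

    print(arrival_separation("wide-body", "light", 3))    # 6.0
    print(arrival_separation("narrow-body", "light", 8))  # 8.0

A complete model would hold hundreds of such rules; the point is only that each line of the Descriptive Model maps onto one conditional test.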
However, since such a model cannot produce quantitative measures of system performance and capacity, the Descriptive Model must be accepted as only a first, albeit extremely valuable, step in system evaluation.

Mechanical Models

Most of the systems which require evaluation will have at least 10 aircraft moving simultaneously within the system, with the possibility of each aircraft following a different path and having different performance characteristics. Except for taxiway evaluation, the aircraft profile will be three-dimensional, with movement in the vertical as well as the horizontal plane. It would therefore be extremely difficult to model an ATC system physically in the way that one can, for example, model a railway system, particularly since the method of traffic control is continuous rather than the moving-block technique usually used by railways. In theory, it should be possible to collect together several model aircraft enthusiasts to participate in the operation of a miniature airspace system; no doubt it would be very enjoyable, but the practical value is questionable.
Manual Simulation Models

Some of the earliest ATC system evaluation was carried out by the use of manual simulation techniques where, for example, a section of airspace was represented by a large-scale navigational chart showing the location of all navigational aids, with the various aircraft routes and danger areas clearly marked. At the start of the simulation, labelled pins would be placed in appropriate positions on the map, ensuring that there were no violations of ATC rules. The 'clock' would then be advanced by the selected unit of time (say 3 minutes), the new position of each 'aircraft' calculated and the pins advanced on the map. A team of qualified controllers would then consider the new positions and sort out any ATC problems by repositioning the pins to avoid any violation of the specified separation minima. All actions taken by the controllers would then be noted, with any delay, vectoring or rerouting that had become necessary. The 'clock' would then be advanced by another 3 minutes, say, and the whole process repeated; this would continue until sufficient data had been collected to indicate how the system was behaving.

The main disadvantage is that the method is obviously very slow, with each advance of the modelled clock taking very much longer in real time (probably a minimum of 15 minutes for each 3-minute step). It depends on a team of people to undertake the calculations, move the pins, identify and resolve the ATC problems and record the results. Although interesting for the first couple of hours, it can quickly become tedious. Because of these factors, it is doubtful whether the game could be played long enough to obtain significantly valid results. In the last decade the technique seems to have fallen out of favour; however, it does have some advantages for the investigation of developing airspace and airports which do not yet have a significantly high movement rate, and it could also be used to investigate physically small problem areas. The preparation prior to 'playing' is minimal, ATC rules and procedures can be modified as the game progresses and, most importantly, it can be played and understood by practising controllers.

Algebraic Models

An Algebraic Model attempts to describe the behaviour of a system by the use of, for example:

a) Applied mathematics
b) Probability theory
c) Queueing theory.

Unfortunately, when we attempt to look at the behaviour of a system as a whole, the problem becomes too complex to represent in simple mathematical terms and cannot be sub-divided into simpler secondary problems, as the various parts of the system are too inter-related. If we wish to construct an Algebraic Model we are therefore forced to simplify the system by making assumptions in order to write down and solve the equations. In practice, there are only a few types of distribution for which we can obtain expressions for average delay, average queue length and the probabilities that delay, or queue length, exceeds given values, and frequently these distributions are inadequate in describing the real-life situation. Further disadvantages of analytic models are that they soon become too difficult for a non-mathematician to understand, they usually measure the performance of the system in terms of averages over a period of time, and they normally cannot deal with the short-term 'overload' situation which so often occurs in real-life systems. However, they are of value when looking for rough measures of performance for small parts of the total system where a fairly low level of accuracy is acceptable, for example in some cases of calculating runway capacity or the load on a sector.
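As a worked illustration of the kind of expression queueing theory supplies (the figures are our assumptions, not the paper's): treat a runway as a single server with Poisson arrivals at a rate λ = 24 aircraft/hour and exponentially distributed service at μ = 30 aircraft/hour. The standard M/M/1 formulae then give a utilisation ρ = λ/μ = 0.8, an average queueing delay W = λ/(μ(μ - λ)) = 24/(30 × 6) hours = 8 minutes, and an average queue length L = ρ²/(1 - ρ) = 3.2 aircraft. As noted above, such long-run averages say nothing about short-term overloads.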
Real-Time Simulation

Real-Time Simulation (sometimes known as Dynamic Simulation) is frequently employed by ATC administrations, both for the evaluation of system design and for the training of ATC staff. The term "Real-Time" arises from the fact that the simulation process takes the same time to operate as does the real-life system, with "time" in the simulation process being measured from a real clock. In Real-Time Simulation no attempt is made to model human participants by mathematical or logical processes; the Simulation Model, or Simulator, is a physical reality, with human controllers performing their normal functions at positions which represent, as closely as possible, the actual controller positions in the real-life system. However, in Real-Time Simulation, communication and sensing equipment inputs (e.g. radar returns) are synthesised by electronic equipment, and the aircraft pilots are "played" by supporting staff. The simulated traffic is processed through the simulator following a normal time scale and modified as necessary by instructions from the controllers, in accordance with the rules and regulations selected for the particular experiment. Actual operating conditions are reproduced as closely as possible, thus enabling maximum realism to be introduced into the study. This factor is particularly important when studying the ergonomic problems of man/man and man/machine relations within
the ATC system, or when testing and developing equipment prior to use in a real operational environment. The fact that the Simulator has a useful role in training staff has long been accepted by ATC administrations, and the use of Simulators for training purposes in areas outside ATC is increasing. However, Real-Time Simulation does have disadvantages when applied to system evaluation, since:

a) the method requires extensive simulator equipment;
b) the scale of investigations is limited by the equipment available;
c) it takes considerable time and effort to mount an exercise;
d) the number and duration of exercises are limited because the simulator runs in real time;
e) equipment prototypes are required if the system involves new hardware;
f) the simulator requires extensive software;
g) experiments are expensive to repeat because the simulator physically changes from application to application;
h) a simulator is expensive to buy and run;
i) a simulator is usually at a fixed location and is not transportable.

Fast-Time Simulation

In Fast-Time Simulation an attempt is made to describe the total system, including hardware and human participants, in a mixture of mathematical and logical terms which can then be processed through a digital computer. It is basically a formalisation and restructuring of the Descriptive Model in such a way that we can calculate what happens to each aircraft as time progresses, and can measure the effect that one aircraft has on other aircraft and on the system as a whole, and vice versa. The replacement of the human participants in the actual simulation process by purely logical and deterministic mechanisms frees the Simulation Model from having to wait for a controller to think or a pilot to react, although these aspects are included and allowed for in the simulation preparation. As the whole of the system is translated into conditional (i.e. IF something THEN do this ELSE do that) and arithmetic terms, full use can be made of high-speed digital computers, which allow the simulation to operate many times faster than real-time simulation, hence the term Fast-Time Simulation. Figure 3 shows diagrammatically how a Fast-Time Simulation Model is run using the facilities of a computer bureau, whilst Figure 4 shows the basic simulation process. [Figures 3 (ATC Fast-Time Simulation Model) and 4 (Basic Fast-Time Simulation Process) are not reproduced in this extract.] Basically, a Fast-Time Simulation Model consists of three parts:

a) The Program, which contains a complete description of the rules for processing the traffic through the simulated environment, expressed in a way which is understandable to the computer (i.e. written in a computer language such as FORTRAN).
b) The Parameters, which are used to control the application of the rules built into the program, by specifying which set of rules should be applied and stating the values of the criteria (e.g. separation minima) which are to be used.
c) The Data, which specify the physical structure of the system and the nature and performance of the traffic to be processed.

The Simulation is started by reading the Program into the computer, which then reads the Parameters and the Data and creates within the store of the computer a Simulator capable of representing the progressive behaviour of the modelled system as time advances. As the simulation continues, the computer enters "aircraft" into the playing area, advances them according to the defined ATC rules and outputs results on the progress of the traffic and the state of the system on a lineprinter, a teletype terminal or a VDU.
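This three-part division might be sketched as follows (Python for readability; the parameter names and the two-flight traffic sample are invented for the example, and a real model would of course be far richer):

    parameters = {           # Parameters: criteria controlling the rules
        "approach_separation_nm": 5.0,
        "wake_separation_nm": 6.0,
    }

    traffic_sample = [       # Data: one record per flight
        {"callsign": "BA123", "type": "wide-body",   "entry_minute": 0},
        {"callsign": "AF456", "type": "narrow-body", "entry_minute": 3},
    ]

    def required_gap(leader, follower, params):
        """Program: a fixed rule applied with values drawn from the parameters."""
        if leader["type"] == "wide-body" and follower["type"] != "wide-body":
            return params["wake_separation_nm"]
        return params["approach_separation_nm"]

    for lead, follow in zip(traffic_sample, traffic_sample[1:]):
        print(follow["callsign"], "needs", required_gap(lead, follow, parameters), "nm")

Because the environment and traffic live entirely in the Parameters and Data, the same Program can be re-applied to a quite different airspace simply by changing its inputs, which is the property exploited in advantage (j) below.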
At the end of the simulation, various tables, graphs and histograms showing accumulated statistics on such measures of system performance as delay, queue length, density of traffic and number of conflict situations are printed, to provide a permanent record of the simulation run. Fast-Time Simulation has the advantages that:

a) the level of detail contained within a model can be controlled, with the model being only as complex as the investigation warrants;
b) actual system hardware is not required;
c) the models are usually relatively inexpensive to construct when compared with Real-Time Simulators;
d) the models are normally easy and inexpensive to modify;
e) running and maintenance costs are low;
f) experiments are readily repeatable, even after several months or years;
g) experiments are carried out in a matter of minutes, and therefore a wide range of alternatives can be evaluated;
h) extensive analysis of system performance is immediately available;
i) very large systems can be investigated, since the only limit to size is the storage capacity of the computer;
j) since the environment is specified by data, the same model can be applied to a wide range of similar systems (for example, the same Terminal Control Area Model has been used to investigate the behaviour of the London, Paris, Rome, Lisbon, Miami, Frankfurt and Munich areas);
k) Fast-Time Simulation Models can be easily and very cheaply copied, since all they consist of is a deck of cards or a magnetic tape;
l) the Models are portable and can be moved from country to country and computer to computer;
m) the Models can be run remotely and simultaneously from several offices;
n) several Models are available off-the-shelf.
The disadvantages are that:

a) since the human element is removed, the Models cannot be used to study detailed ergonomic problems;
b) the initial development of a Model can take between three months and a year;
c) it is not always possible to include all aspects of ATC procedures, and most airspace models do not include full conflict resolution, because of the complexity of getting a computer to undertake picture recognition.

This latter point is not as serious as it may appear at first sight, since the objective in most investigations is to determine and reduce the delay experienced by aircraft and to ensure a safe and expeditious flow of traffic, rather than to measure the effectiveness of conflict resolution procedures. The way in which a Fast-Time Simulation Model is constructed is of great importance, for the Model should be simple enough to be manipulated, understood and run within a reasonable timescale and budget; representative enough to meet all the objectives of the investigation for which it was constructed or modified; and complex enough to represent the system behaviour accurately to the detail required by the study objectives. If the Model is too simple, it will not model the system behaviour in a way which is representative of the real-life system; if it is too complex, the Model will become very large, requiring a great deal of data preparation and absorbing huge amounts of computer time.

Planning a Study

Experience has shown that a study involving the quantitative measurement of a system can be divided into a number of basic tasks, to be completed in approximately the given order:
a) formulation of the problem;
b) definition of objectives;
c) selection of evaluation technique;
d) project planning;
e) construction of a descriptive model;
f) selection of quantitative criteria for the measurement of system performance and capacity;
g) collection of data and forecasting of demand;
h) formulation of the mathematical model;
i) construction of the necessary computer programs;
j) preparation of data into the correct format;
k) validation;
l) specification of alternative values for model parameters;
m) computer running;
n) analysis of results;
o) interpretation and comparison of results;
p) recommendations.

These tasks are not of the same duration, and one task may require different technical skills from several of the others, and vice versa. The team required to carry out the study should be composed of mathematical/operational research analysts with detailed knowledge of the modelling technique to be used, operational analysts experienced with the type of system under evaluation, and programmers experienced in high-level computer languages (such as FORTRAN). In Figure 5 an attempt has been made to illustrate the methodology of studying a system by use of modelling techniques, showing the relationships between the fundamental stages of the study. [Figure 5 (Methodology of System Analysis by Fast-Time Simulation) is not reproduced in this extract.]

Before any attempt is made to decide the technique to be used, it is essential to have a clear statement of the terms of reference and the objectives of the investigation, since the particular model of the system will vary according to the purpose of the study. At this stage it is also important to determine both the time scale and the budget available, as these factors could affect the depth of the investigation and the technique selected. Normally it is possible to produce useful results within any reasonable time and monetary constraint, but the uncertainty associated with the results, and the extent of the measures of system performance obtainable, will vary inversely with these constraints, which are themselves highly correlated.

The first stage in actually building the Model should be a detailed study of the operation of the real or proposed system which is to be investigated. This is almost always done in close collaboration with the personnel involved in the operation or planning of that system. For such an analysis, detailed descriptions and flow diagrams can be prepared of the various elements of the system. Usually the system elements can be classified under four broad headings:

i) system environment;
ii) aircraft progress through the environment;
iii) constraints on the movement of aircraft;
iv) decision logic employed by the control organisation.

Once this has been done, we have a descriptive model of the system, which serves to define and isolate the salient features of the system and to specify qualitatively the interaction between its various elements. The level to which this descriptive model descends will be primarily dictated by the choice of modelling technique and the extent of the total system that one is forced to model. For example, if we are modelling a taxiway system it will not be necessary to consider arrivals before they reach the runway or departures after they leave the runway, nor will it be necessary to model passenger and cargo handling.
At this stage the description should also include:

a) specification of the objectives of the system;
b) specification of the criteria by which system performance and capacity should be measured;
c) specification of the alternative principles and methods of control which it is desired to evaluate.

The next stage in the modelling process is to translate the detailed descriptions and flow diagrams into mathematical/logical terms. In the case of Fast-Time Simulation, we also need at this stage to select the appropriate Activities and associated Events into which the system processes must be sub-divided for successful application of event-to-event simulation techniques. An Event occurs at an instant of time and causes a change to the system to take place, for example:
i) warning of entry to the system or part of the system;
ii) entry to the system;
iii) joining a queue;
iv) leaving a queue;
v) start of a processing action or movement;
vi) end of a processing action or movement;
vii) transfer between system elements (i.e. between sectors);
viii) request for a service (e.g. clearance);
ix) control intervention;
x) exit from the system.

Associated with each of these Events are Activities, which are system functions that usually take time to accomplish and result in a change of status, such as moving from one place to another or the use of a system resource. Examples of such Activities are:

i) clearance to enter the system or part of the system;
ii) allocation of departure runway;
iii) approach sequencing;
iv) holding of an aircraft in a stack;
v) implementation of flow control;
vi) search for potential conflicts;
vii) moving an aircraft through a section of airspace or taxiway.

The selection of the correct Events and Activities for a Fast-Time Simulation Model is critical to the success of the Model, as it affects both the level of realism and the speed of Model operation (a minimal sketch of this event-to-event mechanism is given at the end of this section). Usually, once the mathematical model is formulated, the next step is to turn it into a computer model which can be input to a digital computer, so that all the calculations which would otherwise have to be done by hand can be done automatically. The study stage associated with the analysis and interpretation of results requires some experience, since it is concerned with translating the numerical results output by the model into operationally meaningful terms which can be readily understood and digested by non-mathematicians.

Once the model has been programmed and debugged, it should be validated by using it to reproduce known conditions. This provides a check that all the relevant factors have been included in the model and that the behaviour of the model resembles that of the real system to the level of accuracy required. Following any improvements to the model necessitated by the validation process, the model can then be applied to assess the system performance and/or capacity of the various proposed system configurations. In the case of a Simulation Model, detailed examination of the results will not only evaluate system capability, but will also indicate the areas of limiting capacity, congestion and potential weakness in the system, and will provide important data to aid the redesign of the system to improve performance.

The timescale for a study of this type will vary with the technique selected, the extent of the system and the degree of detail which has to be modelled. At a minimum it will probably be three months with a team of three people, and at a maximum a year with a team of five. However, several steps can be eliminated if a Model has been previously developed and can be applied either directly or with slight modification. In this situation the timescale for the evaluation of even a complex system can fall to three months or less.
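The event-to-event mechanism described above can be sketched as follows (Python for readability; real models of the period were written in FORTRAN, and the aircraft, Event names and Activity durations here are invented for the example). The simulated clock does not tick uniformly: it jumps from one Event to the next, and each Event triggers an Activity that schedules the aircraft's next Event.

    import heapq

    events = []  # priority queue of (time, event name, aircraft)

    def schedule(time, name, aircraft):
        heapq.heappush(events, (time, name, aircraft))

    def run(until=60.0):
        while events:
            time, name, aircraft = heapq.heappop(events)
            if time > until:
                break
            print(f"t={time:5.1f}  {aircraft}: {name}")
            # Each Event starts an Activity that takes time and results
            # in the aircraft's next Event being scheduled.
            if name == "enter TMA":
                schedule(time + 4.0, "join stack", aircraft)
            elif name == "join stack":
                schedule(time + 6.0, "leave stack", aircraft)   # holding
            elif name == "leave stack":
                schedule(time + 9.0, "land", aircraft)          # approach

    schedule(0.0, "enter TMA", "BA123")
    schedule(2.5, "enter TMA", "AF456")
    run()

Because nothing happens between Events, the computer skips the intervening simulated time entirely, which is what allows the model to run many times faster than real time.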
The TMA Model

As an example of a Fast-Time Simulation Model, let us consider the TMA Model developed by Software Sciences Limited, which has been used to evaluate proposed developments of existing airports and the siting of new airports in some six countries. The TMA Model deals with that part of the airspace (termed the Terminal Control Area) which surrounds a complex of closely sited airports, in which arrivals are descending from their cruise level to land at a specified runway and outbound aircraft are climbing to their cruise level following departure. The Model is capable of representing some 25,000 square miles of airspace from ground level up to 40,000 feet and can deal with up to 10 runways distributed between any number of airports beneath this area. Inbound aircraft are simulated from their point of entry to the TMA to the point of turn-off from the runway; outbound aircraft from the time of joining the outbound queue just prior to the departure runway to the point of exit from the TMA; and overflights from point of entry to the TMA to point of exit.

Inbound aircraft are processed through the Model taking into account:

a) the separation provided at the TMA boundary by en-route sector controllers;
b) the requirement to delay entry to the TMA due to congestion within the TMA or at the runways;
c) the allocation of holding levels in holding stacks, or the provision of in-trail separation between inbound aircraft;
d) transition of ATC sector boundaries;
e) management of holding stacks;
f) the restriction on the number of aircraft simultaneously under the control of each Approach Controller;
g) the processes of approach sequencing by use of speed control and path stretching;
h) the integration of private traffic into the landing stream of commercial aircraft;
i) the interaction between converging landing streams;
j) the effect of wake turbulence on approach separation;
k) the interaction between arrivals and departures on the runway, with the possibility of restriction of the inbound stream in the event of serious outbound congestion.

Outbound aircraft are processed taking into account:

a) constraints imposed on outbounds due to interacting inbound traffic on the same or on interacting runways;
b) constraints imposed on outbounds due to previous departures from the same or interacting runways;
c) restrictions on departure times due to wake turbulence effects on inbounds on crossing runways and on departures from the same runway;
d) the availability of alternative departure runways;
e) hand-over of departures from the Local Controller to a Departure Controller;
f) the restriction on the number of aircraft simultaneously under the control of a Departure Controller;
g) transition of ATC sector boundaries.

Aircraft profiles are calculated taking into account the differences in performance of different aircraft types and the imposition of Standard Instrument Departure routes (SIDs) and Standard Arrival Routes (STARs), which involve the use of height restrictions strategically placed to avoid conflicts between different streams of traffic. During the processing of the traffic, the Model detects all infringements of ATC separation standards but does not resolve them, except for those detected during approach sequencing and departure clearance procedures; these are resolved by the application of delay.

Data requirements

The environment to be simulated, such as that shown in Figure 6, is specified to the TMA Model by describing the route structure in numerical terms and by the use of parameters to define the runway operation rules, the stack management rules, the approach sequencing rules, the ATC separation standards, the departure procedures, the form of sectorisation, etc. [Figure 6 (South Florida Airspace Study) is not reproduced in this extract.] The traffic to be processed is prepared as a traffic sample, which gives details of each individual flight, together with the performance data tables. Since the particular TMA to be investigated is specified by input data, the Model is capable of simulating a wide range of TMAs without changes to the model logic. For any particular investigation it is possible to consider a wide range of system variations due, for example, to changes in:

a) the number and/or siting of airports;
b) the number of runways in use at each airport;
c) the mode of operation of each runway;
d) the route structure design;
e) the ATC sectorisation;
f) the landing direction.

For each of these variations, the volume, mix and distribution of traffic can be readily changed in order to determine the reaction of the system to different demand levels and patterns.

Model output

The TMA Model translates the demands imposed upon the Terminal Control Area and associated airports, expressed in terms of traffic levels, into quantitative measures of:

a) TMA System Loading, measured in terms of the number of aircraft simultaneously under the control of the various en-route, arrival and departure controllers, and the number of aircraft using each route or segment of route;
b) TMA System Performance, measured in terms of the distribution of delay, the use made of holding stacks and the need to divert aircraft to other runways;
c) Airport Performance, measured in terms of departure queues, runway movement rates, and arrival and departure delay;
d) Complexity of the ATC Task, with respect to the resolution of infringements of ATC separation standards, measured in terms of the spatial distribution and type of such infringements;
e) Penalties to Operators resulting from system capacity restrictions and the application of ATC rules, measured in terms of aircraft delays, route mileages, flying times and non-optimum flight paths;
f) Route Structure Efficiency, measured in terms of excess route mileage flown in comparison with straight-line distances from points of entry to points of exit.
From these results:

a) the TMA structures can be redesigned to increase system capacity and reduce delay, and hence energy consumption;
b) alternative TMA systems and policies can be evaluated and compared;
c) system behaviour with increased traffic demand can be evaluated and the increase in delay measured;
d) those parts of the TMA restricting capacity can be determined;
e) the effect of the introduction and siting of new airports can be assessed;
f) new sectorisation designs can be tried and tested.

Model users

The TMA Model has been extensively used on real planning problems for more than a decade. Originally developed for the Royal Radar Establishment (UK) to evaluate TMA performance for different radar and navigational system accuracies, it was subsequently used on behalf of the Commission for the Third London Airport to evaluate the ATC factors affecting the siting of the Third London Airport. Following these applications, it was used for the Stanford Research Institute (USA) and the Royal
Aircraft Establishment (UK). It was then completely rewritten for a study with Eurocontrol into the possible use of Le Bourget as a general aviation airport when Charles de Gaulle airport came into operation. Subsequently, it was used for the State of Florida DOT to study the siting of a new South Florida airport, by the Civil Aviation Authority (UK) to study alternative routing policies with Maplin in operation, for the BFS (Germany) to study the Munich TMA, for ITAV (Italy) to study the Rome TMA, and for the DGAC (Portugal) to study the Lisbon area.

A descendant of the Model has been developed by Software Sciences Ltd for the Eurocontrol Agency to evaluate simultaneously the performance of air traffic over virtually the whole of Western Europe. This model is currently being used by Eurocontrol to experiment with the concept of airspace scheduling, whereby delays to air traffic can be reduced by better co-ordination between airlines in scheduling flights and selecting aircraft routes to avoid airspace congestion. The next logical step is the introduction of a dedicated computer system capable of assisting aircraft operators in the preparation of their timetables, and in fitting in charter and special flights at short notice, in such a way as to avoid creating delay for themselves and other airspace users. When this becomes a reality, and it may take a further decade before such a system is operationally acceptable, the reduction of delay will result in a considerable saving in energy consumption. With Heathrow alone handling in excess of 300,000 flights per annum, a saving of 1 minute of delay per flight could save at least 8,500 gallons of fuel each day, every day. (At roughly 820 flights a day, this corresponds to about 10 gallons of fuel for each minute of delay avoided.)

Conclusions

Fast-Time Simulation has proved to be a very powerful research and evaluation tool in dealing with the complex, sophisticated and dynamic nature of ATC systems. The technique has the ability to cope with problems which are mathematically intractable and which resist solution by other analytical methods, whilst at the same time avoiding the potentially high costs, dangers and difficulties of experimenting with the real system. With fuel costs continually increasing and representing a growing proportion of airline operating costs, the need to plan delay out of ATC systems, and to ensure that aircraft operate efficiently, is becoming more important, particularly since considerable savings can be made by better scheduling of flights and improvements to ATC systems.

Although this discussion has centred on ATC systems, the techniques described have application to other transportation systems and to supply and distribution networks. In fact, any system involving the movement of people, materials or messages from place to place through a network subject to specific constraints, and controlled by human or automatic decision-making elements, can be investigated through the use of computer-based models. The advent of cheap computing power has enabled the design and operation of complicated systems to be evaluated with a level of accuracy, and within a timescale, which would be impossible by other means.
Electronic controls for refrigeration plants with particular reference to energy savings and supermarket applications
J.Schmidt, Linde AG
With an Appendix by M.Tinsley, BOC-Linde Refrigeration Ltd, Institute of Refrigeration
1. Introduction

We can already predict today that our century will one day go down in history as the "age of electronics". Space exploration would have been impossible without the building blocks of electronic equipment, and the same may be said of the solution to the most pressing problem of world economics: how to reduce to a tolerable level the explosion in the price of energy. This will involve not only energy-saving methods, but also the full use of primary energy. At the same time, we must not forget that the cost of skilled labour has increased out of all proportion, with the result that regularly recurring tasks such as service and installation should be kept to a minimum. Microprocessors lend themselves to such a purpose. Let us recap: in order to avoid a collapse in the world economy, we have to develop, with the help of electronics, cost-reducing measures which will lengthen the life of our present energy reserves and utilise manpower more sensibly from an economic point of view.

2. Food markets and methods of saving energy

Supermarkets are an important means of providing the population of a country with foodstuffs. The many ranges of products must not only be displayed in an attractive and saleable manner; it is also essential that over 60% of fresh food products be protected against deterioration by refrigeration. The foods are displayed to the consumer in product-orientated refrigerated cabinets.

3. Order of magnitude

To give an impression of the scale and importance of this field of business in the socio-economic sector, here are some statistics compiled in West Germany, but similarly applicable in Great Britain. The total floor-space occupied by the food trade on 1st January 1981 was about 14,130,000 m2. If it is taken into account that the installation cost of sales and store areas is about 450-550 DM/m2, investment in the total area equipped amounts to around DM 7,000 million. An additional average figure puts the annual energy consumption per square metre of sales area at around 320 kWh. This means a total of about 4,500 million kWh or, at a unit price of about 0.17 DM/kWh, a value of DM 765 million. To this must be added the cost of supplying the energy. These figures should give a general impression of the socio-economic importance of saving measures; they do not take into account oil and gas requirements for space and water heating of the supermarket in general (see Appendix).

4. Categories of users with respect to different forms of energy

Almost all familiar forms of energy, such as electricity, heating oil, gas and district heating, are used in the grocery business, depending on the specific choice made. The most commonly used form of energy is probably electricity, as a power supply for machines and for store lighting. As the use of electricity can be readily optimised by specific controls, such as increasing, decreasing or switching off, significant savings can be achieved in this sector.
In this connection, we should not omit to mention the heat produced by condensers, which can be used as additional energy, both for year-round water heating and for seasonal air heating. Let us examine the significant categories of usage in the food trade in which electricity consumption can be optimised by the use of electronic equipment:

1) Lighting in sales areas, store rooms, advertisements, parking areas, etc.
2) Cooling installations: less through sequential switching off of compressors than through exact regulation.
3) Ventilation and, where required, air-conditioning.
4) Electricity usage in specific departments, in preparation or sales areas.
5) Water heating.
6) Space heating.

5. Tariffs of electricity

No general comments can be made about tariff rates. In West Germany the system is so diversified that, taking into account the different counties and suppliers, there are over 200 possible different contracts. However, all have in common a maximum-demand tariff during peak periods, which enables the suppliers to levy special charges on usage in excess of the agreed maximum. Thus an oven switched on at the wrong time can frustrate all saving measures.

6. Electronically-controlled optimising switching systems have the capacity to intervene directly in such situations and to prevent such errors from occurring in the first place. They contain in addition a device which reacts to prevent the set maximum loading from being exceeded, by shedding loads, making it possible to keep within the required limits. Furthermore, such an instrument can be programmed in advance, for periods of a year, to perform specific on-and-off switching functions. It is, for example, possible to programme:

a) an ambient-related air-conditioning and ventilation controller;
b) a demand-related lighting system, such as for the car park or advertising, using dimming devices;
c) electrically-operated energy-saving night blinds for refrigerated display cabinets.

Even fire protection, security and alarm systems, and many more, can be embraced, to organise the way in which they are used throughout the week. In consistency of functioning and reliability, electronic systems for specifically designed tasks have proved superior to man.

7. We cannot recommend energy saving through additional time switches in a refrigeration plant if it is already precisely and carefully controlled by a thermostat. The exception would be a peak-limiting device for all usage groups, in which the refrigeration plant is included. This must be arranged in a carefully determined sequence, which takes into account acceptable limits for the stopping periods of the cooling system. It is important to avoid an increase in the temperature of the products, which would be detrimental to their quality.

8. The possibilities, so far outlined in brief, of directly optimising the use of electric current, or of indirectly optimising the use of, for example, heating oil by means of an electronic monitor, can help to reduce energy use in a supermarket significantly. Savings of up to 30% can be achieved, depending on the age, condition, equipment and size of the supermarket. It is, of course, important in this connection that all such measures should not detract from the objective of satisfying the customers' needs profitably. It would be a bad way of saving energy if a housewife were to overlook a product as a result of reduced lighting of the store or its signs, or if an escalator were stopped on the pretext that the customer can surely climb the stairs.
In order to exclude such errors, all energy-saving systems must be readily programmable by the users of the store, and must in addition be capable of being overridden manually.

9. We mentioned that, except for the use of peak-demand switching, well designed cooling systems do not really require additional controls; in fact, these can present problems. However, there is a need to optimise energy in cooling systems, as refrigerated cabinets, cold stores, air-conditioned areas, cooled preparation areas, etc., account for up to 40% of the electrical energy used in a food store.
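A minimal sketch of the peak-demand limiting described in sections 6 and 7 might look as follows (Python is used purely for illustration; the load names, ratings, demand limit and the 15-minute refrigeration stop limit are invented for the example). Loads are shed in a predetermined priority order, and the refrigeration plant may only be stopped within an acceptable time limit, so that product temperature is protected:

    DEMAND_LIMIT_KW = 120.0

    # (name, rating in kW, may be shed, max. stop in minutes) - least critical first
    loads = [
        ("car park lighting",   10.0, True,  None),
        ("water heating",       25.0, True,  None),
        ("space heating",       30.0, True,  None),
        ("refrigeration plant", 40.0, True,  15),    # product-safety limit
        ("sales area lighting", 20.0, False, None),  # never shed
    ]

    def shed_plan(current_demand_kw, refrigeration_off_minutes):
        """Return the loads to switch off to stay under the demand limit."""
        to_shed, demand = [], current_demand_kw
        for name, kw, may_shed, max_off in loads:
            if demand <= DEMAND_LIMIT_KW:
                break
            if not may_shed:
                continue
            # refrigeration is only shed while the stop stays within safe limits
            if max_off is not None and refrigeration_off_minutes >= max_off:
                continue
            to_shed.append(name)
            demand -= kw
        return to_shed, demand

    print(shed_plan(150.0, refrigeration_off_minutes=0))
    # (['car park lighting', 'water heating'], 115.0)

The ordering of the list embodies the priority sequence the paper insists on: lighting and heating go first, refrigeration only within its safe stopping period, and sales-critical loads not at all.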
10. Linde Compactronic

Thinking along these lines was already under way some ten years ago. Even at that time cost increases could be foreseen, and they were grouped in the following categories:

a) Increased price of all forms of energy.
b) More stringent legislation with respect to improvement of product quality by means of refrigeration.
c) Increase in service and installation costs as a result of the ever-rising price of labour.

A response to these propositions was the new control and wiring method, "Linde Compact Electronic", developed in 1970 and now in its third generation of application. Let us at this stage briefly describe the functioning of the temperature control system in general use today in conventional refrigerated cabinets. In brief:
- The refrigeration compressor is installed remotely in a plant room.
- The refrigerated cabinet is connected to the compressor by copper piping.
- Temperature regulation and defrosting are controlled through vapour-pressure operated mechanical switches.
- All electrical control instrumentation is housed in a central panel installed in the plant room. In the event of electric control failure, the compressor is stopped.
- Safety devices in the plant, such as defrost termination thermostats, are, as previously mentioned, vapour-pressure operated.
- Fault-finding and readjustment are time-consuming, because of the various delays in the mechanical action between switchings.
11. The Compactronic differs from this method of operation in significant details. The most important component is a temperature sensor with a negative temperature coefficient (NTC). The resistance (in ohms) of the sensor, which is a semiconductor, changes according to its temperature. The resistance value is compared against the requisite value in a Wheatstone bridge, correspondingly amplified, and converted directly, by means of a switching relay, into a switching function. The Compactronic is powered by a 24 volt supply, ruling out the possibility of contact fusing. Here are the differences between the Compactronic and a conventional control system. In brief:

- Fast and exact regulation by means of a highly sensitive temperature sensor.
- The place where the temperature is measured can be changed by simple relocation of the sensor.
- Pre-programming ensures that, in the event of a disruption within the controls, the compressor is kept running in the case of sub-zero cooling cabinets, and is switched off in the case of above-zero cooling cabinets.
- A factory-assembled, pre-wired control box is installed in the cabinet.
- In a row of cabinets the wiring is connected by means of simple plugs.
- Functional testing, initial adjustment and setting of controls are done by means of testing apparatus in a very short time, with no further checking necessary.
- The requisite interlocking of cooling and defrosting is here achieved through electronic (contactless) components.

You will already recognise from this presentation the significant differences and improvements as against a conventional system of temperature control. Apart from the simplification in installation and service, we can expect energy savings through a precise (that is to say, electronic) system of temperature control.

12. In order to gain a further perspective on possible ways of saving energy, it is important to localise the largest energy users in the refrigerated cabinet cooling system:

a) The refrigeration plant proper (compressors, condensers).
b) Electric defrosting.
c) Anti-condensation heating.
d) Air change and air losses in open self-service cabinets, in particular wall cabinets.
e) Heat gain from the environment.

13. Before further energy saving measures may be discussed, it is essential to know in what priority they need to be arranged. A "priority index" has been developed, to which all measures are subjected:
1st priority = product quality (product temperature)
2nd priority = functional reliability
3rd priority = energy savings

You may ask: why should energy savings come third? We answer: what is the use of energy-saving methods if the functional reliability of a refrigerated cabinet is thereby reduced, and the consequent damage (ruined products) causes a cost many times as great?
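Returning to the temperature control principle of section 11, a hedged sketch of the NTC measurement and switching decision follows (Python for illustration only; the real Compactronic uses a Wheatstone bridge and analogue amplification, and the beta-model constants, set point and hysteresis band below are our assumptions, not Linde's values):

    import math

    R_25, BETA = 10_000.0, 3950.0    # assumed NTC: 10 kOhm at 25 C

    def ntc_temperature(resistance_ohms):
        """Temperature (C) from NTC resistance, simple beta model."""
        t_kelvin = 1.0 / (1.0 / 298.15 + math.log(resistance_ohms / R_25) / BETA)
        return t_kelvin - 273.15

    def thermostat(temp_c, compressor_on, set_point=-20.0, hysteresis=1.0):
        """Relay decision with hysteresis around the set point."""
        if temp_c > set_point + hysteresis:
            return True                  # too warm: run the compressor
        if temp_c < set_point - hysteresis:
            return False                 # cold enough: stop the compressor
        return compressor_on             # inside the band: no change

    r = 97_000.0                         # a high resistance means a cold sensor
    print(ntc_temperature(r))            # about -19 C for these constants
    print(thermostat(ntc_temperature(r), compressor_on=True))

The hysteresis band plays the role of the mechanical switching delays it replaces, but its width is now exact and adjustable, which is where the claimed precision, and hence the energy saving, comes from.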
14. To complement the Compactronic, a number of additional components have been produced which can directly affect power consumption. Let us consider first of all a development of demand defrost. We can differentiate today between a number of defrosting systems, all intended to melt the ice built up on the evaporator in the shortest possible time and with the least possible effect on the temperature of the product. In the conventional model, defrosting is initiated by a time switch, usually once or twice in 24 hours. The following methods are available:

- electric heating
- hot or warm gas
- ambient air

Owing to the different sizes of food markets in Europe, single-compressor installations still dominate, and multiple-compressor installations are not found in such numbers as to warrant, at admittedly higher installation cost, the use of hot gas to defrost the evaporator and the evaporator space. Owing to this situation, electric defrosting is still preferred to the alternative systems. It is not always possible to determine the correct time for de-icing by means of a time clock, and optimisation of the energy required for defrosting is not achieved within the framework of cabinet construction alone. We have therefore developed a supplementary electronic device which is able to determine, by means of a melting-time indicator, the ice thickness on the evaporator in each refrigerated unit. According to the melting time, it is determined whether up to 3 defrostings can be omitted; every fourth defrosting must take place, to ensure functional reliability. An important aspect is that the defrosting process is still initiated, as in the past, by a time switch. Should the evaporator become iced up more quickly, as a result of a sudden increase in the relative humidity of the store, a supplementary hygrometer can directly override the electronic mechanism and bring the defrosting process into operation at the very next opportunity. Up to 13% of the total energy, including energy consumed by the compressor and in defrosting, can thus be saved, with complete functional reliability of the installation ensured.
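The skip-counting logic just described might be sketched as follows (Python for illustration; the melting-time threshold is an assumption, while the "skip at most three, force every fourth" rule and the hygrometer override restate the text):

    SHORT_MELT_SECONDS = 120   # assumed: a short melt indicates a thin ice layer
    MAX_SKIPS = 3

    def defrost_decision(last_melt_seconds, skips_so_far, humidity_override):
        """Return (defrost_now, new_skip_count) at each time-switch instant."""
        if humidity_override:                   # hygrometer forces a defrost
            return True, 0
        if skips_so_far >= MAX_SKIPS:           # every fourth defrost must run
            return True, 0
        if last_melt_seconds < SHORT_MELT_SECONDS:
            return False, skips_so_far + 1      # thin ice: omit this defrost
        return True, 0

    print(defrost_decision(90, 2, False))   # (False, 3) - third skip allowed
    print(defrost_decision(90, 3, False))   # (True, 0)  - forced fourth defrost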
15. In the case of open refrigerated cabinets, a regular air intake of about 30% of the circulated air volume must be reckoned with, apart from direct heat gains from the environment. To cool this air down to product temperature is a waste of energy, if one considers that the store is open at most ten hours out of twenty-four. The use of a simple night blind brings an average annual energy saving of up to 18%. This figure can be improved further if it is taken into consideration that, because of the control mechanism, the product temperature is necessarily a few degrees lower than needed during the covered period. With a very simple additional component, we have managed to save a further 3-5% in energy by keeping the temperature constant. Furthermore, this version can be operated in conjunction with an electrically-powered blind, as part of the overall energy control of a supermarket.

16. Due to the big temperature difference between the product space in the refrigerated cabinet and the surrounding atmosphere, and to changes in relative humidity through the year, the cabinet manufacturer is forced to arrange for heating of various parts of the cabinet to prevent condensation. For this, too, there are devices which switch the heating on and off depending on the relative humidity of the surrounding atmosphere. With the help of these instruments, up to 50% of this heating energy can be saved annually.

17. Whilst on the subject of the use of electronic building bricks for the purpose of saving costs, we must not forget their use in alarm systems. These are systems which help to prevent loss of products, and they form part of the "Compactronic". Apart from the temperature control system, an independent NTC sensor maintains constant surveillance against a predetermined temperature. Temperature-warning signals are delayed, to cater for the short-term rise in temperature after defrost. An additional electronic unit permits the connection of up to 8 measuring points to one pilot system. Devices are available which give an acoustic signal (to be switched off after the alarm has sounded) and which can be mounted on the refrigerated cabinet or in a central unit, for example in the store manager's office. This control panel can monitor up to 12 warnings, and the instrument can be connected to audible alarms, signal lamps, or even into the GPO telephone network to contact the manager directly.
18. We have given a full picture of the opportunities for the saving of both energy and labour in the food market, using electronics wherever possible. We have shown that, even in as specialised a field as the food trade, quite new applications for electronic control devices exist. We are, however, aware that we are only at the beginning of a continuing process of development, essential if energy is not one day to be weighed equally with gold.

APPENDIX: Use of rejected heat

In section 3 of the paper it was stated that the annual energy consumption per m2 of the refrigerated sales area is approximately 320 kWh. In a typical supermarket, for every kW of driving power used in refrigerated cabinets, some 2-3 kW of heat will be rejected to the surrounding atmosphere. The cold cabinets will also be extracting heat from the supermarket environment, and this will amount to between 1 and 2 kW. It follows, therefore, that during the heating season, taken as 240 days per annum, this heat has to be supplied by some form of heating to maintain comfort conditions. In a typical year, the heat to be supplied will be approximately 550 kWh/m2.

There is clearly an opportunity to use the heat rejected to atmosphere from the refrigeration plant. Supermarkets usually have more space for un-refrigerated goods than for refrigerated, and thus the overall heating requirement in present-day buildings is greater than the total that can be recovered from the refrigeration. It follows that the objective should be that all heat raised above the ambient temperature in the building be recycled. There is thus a two-pronged opportunity for saving heat: firstly from the refrigeration operation, and secondly from all other sources emitting heat within the supermarket.

The table below sets out examples of actual installations by BOC-Linde using direct heating of air, and using the higher levels of heat from the superheated discharge gas for domestic hot water. The illustration (not reproduced in this extract) shows schematically the system so far most commonly installed. It is worth noting that the pay-back period is rarely greater than 2 years; indeed, wherever refrigeration equipment is installed, it behoves manufacturers, consultants and users to ask whether every avenue for the re-use of energy is being explored.
Results from installations:

                                        Installation 1         Installation 2          Installation 3
SALES AREA                              918 m2 (9,881 ft2)     3,562 m2 (38,342 ft2)   666 m2 (7,168 ft2)
WAREHOUSE AREA                          750 m2 (8,072 ft2)     962 m2 (10,354 ft2)     140 m2 (1,506 ft2)
HOT WATER RECLAIM REQUIREMENT           -                      -                       46 gal storage tank
INSTALLED NOMINAL POWER (motor rating)  72 kW                  83 kW                   34 kW
DAILY ABSORBED RUNNING POWER
  (18 hours per day)                    1,192 kW               1,413 kW                576 kW
AVERAGE COMPRESSOR RUNNING FACTOR       0.7                    0.8                     0.7 (actual)
AIR QUANTITY & TEMPERATURE AVAILABLE    90,000 m3/hr at 26 C   67,968 m3/hr at 29 C    40,200 m3/hr at 25.5 C
HOT WATER AVAILABILITY                  -                      -                       32 gal/hr at 60 C
ENERGY SAVING (kWh, based on
  240 ten-hour days)                    362,400                446,400                 214,080
ANNUAL COST SAVING (2p per kWh)         £7,248                 £8,928                  £4,282
ADDITIONAL COSTS:
  AIR HEAT RECLAIM EQUIPMENT            £14,000                £14,141                 £4,202
  WATER HEAT RECLAIM EQUIPMENT          -                      -                       £1,961
SIMPLE PAY-BACK PERIOD                  1.9 yrs                1.6 yrs                 1.44 yrs
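As a check on the arithmetic of the table (our working, not the appendix's): for the first installation, 362,400 kWh saved at 2p/kWh gives £7,248 a year, and £14,000 of air heat reclaim equipment divided by £7,248 a year gives the quoted simple pay-back of about 1.9 years; similarly, for the third installation, (£4,202 + £1,961)/£4,282 gives about 1.4 years.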
Wind power applied to domestic requirements and some related electronic problems
Professor A.L.Fawe, Montefiore Institute, Liège University

This is a synopsis of a more extensive paper being prepared for The Watt Committee "Energy and Electronics" summer school in September 1981.
Introduction

The purpose of this paper is to describe the potential of wind power to satisfy the energy needs of a home at all levels: heat, electricity and motorised travel. The long-range goal of the investigation is full independence from resources not available in the country itself.

Estimate of the energy required per year

Based on our own experience, we can state the following:

a) Heating. The fuel-oil consumption during the 1970-1978 period ranged from 30 to 47 litres per day or, assuming an 80% efficiency, 240 to 376 kWh/day. During the summer, the consumption dropped to 5 to 13 l/day, or 40 to 104 kWh/day.

b) Electricity. The consumption at the present time amounts to 5,000 kWh/year, or 14 kWh/day (since it can be assumed fairly constant through the year).

c) Motorised travel (also assumed uniformly distributed). At a rate of 36,000 km/year and 5.5 litres of gasoil per 100 km, the requirement is 5.4 l/day, or 43 kWh (based on the same efficiency as in a)).

kWh/day        Winter   Summer
Heating          240       40
Electricity       14       14
Car               43       43
Total            297       97

In the above table we quote the lower values for heating, since it seems wiser to use, for instance, a wood/coal furnace in the living-room rather than a larger wind turbine generator to make up the difference. The conclusion is that we need 300 kWh/day in the winter and three times less in the summer.
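As a check on these conversions (the calorific value is our assumption, not the paper's): fuel oil delivers roughly 10 kWh per litre, so 30-47 l/day at 80% efficiency is about 240-376 kWh/day of useful heat, and 5.4 l/day of gasoil corresponds to roughly 43 kWh/day on the same basis, as quoted.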
Fundamentals of wind power

The power density of the wind is

    p = ½ ρ v³  (W/m2)

where ρ = 1.25 kg/m3 and v = wind speed (m/sec). However, by Betz's theorem, at most the fraction 16/27 of this can be converted. The efficiency of the device itself ranges from 30 to 70%; assuming 50%, the available wind power density will be

    p_a = 0.5 × (16/27) × ½ ρ v³ ≈ 0.19 v³  (W/m2)
Figure 1
Figure 1 compares this result (curve 1) with the power density collected by a commercially available 10 kW wind turbine generator (curve 2).

Statistical properties of the wind

Histograms of the wind speed at two meteorological stations close to the site are shown in Figures 2 and 3*. Figures 4 and 5 give the percentage contributed by each cell of the histogram to the average available power

    p̄_a = ∫ p_a(v) f(v) dv

where f(v) is the wind speed probability density. These figures indicate that a cut-in wind speed of 3 m/sec and a rated wind speed of 9 m/sec are appropriate. Figure 1 shows that p̄_a so computed is a slight upper bound on the yearly average power obtained with an actual device. [The numerical values obtained for these two sites are given in the full paper.]

On the other hand, available data on the average wind speed through the year show that the ratio of winter power to summer power is of the order of 2.5-3, and the wind speed through the day peaks from about 10 a.m. to 5 p.m. Wind power is therefore especially well matched to our energy requirements. From the above data we may assume an available power density of about 17-20 W/m2 in the summer and an average of 33-35 W/m2 through the year, and hence estimate the required size of the machine. [The sizing calculation and the resulting blade length are given in the full paper.]
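A hedged numerical sketch of the average-power integral above follows (Python for illustration; the Rayleigh distribution and its 6 m/sec mean are our assumptions standing in for the paper's measured histograms, so the result should not be expected to reproduce the 33-35 W/m2 figure):

    import math

    RHO = 1.25            # air density, kg/m3 (as in the paper)
    BETZ = 16.0 / 27.0    # Betz limit
    DEVICE_EFF = 0.5      # assumed device efficiency (paper: 30-70%)
    V_CUT_IN, V_RATED = 3.0, 9.0   # m/sec, from the histograms

    def p_a(v):
        """Available power density (W/m2) at wind speed v."""
        if v < V_CUT_IN:
            return 0.0
        v = min(v, V_RATED)        # output held constant above rated speed
        return DEVICE_EFF * BETZ * 0.5 * RHO * v ** 3

    def rayleigh_pdf(v, mean=6.0):
        sigma = mean * math.sqrt(2.0 / math.pi)
        return (v / sigma ** 2) * math.exp(-v ** 2 / (2.0 * sigma ** 2))

    # trapezoidal-style integration of p_a(v) * f(v) dv from 0 to 30 m/sec
    dv = 0.01
    avg = sum(p_a(k * dv) * rayleigh_pdf(k * dv) * dv for k in range(1, 3001))
    print(f"average available power density: {avg:.1f} W/m2")

The strong dependence of the answer on the assumed mean wind speed is exactly why the paper works from measured site histograms rather than an assumed distribution.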
Figure 2
Figure 3
Deriving electric power for domestic appliances

Since the power required for heating is of the order of 10 kW (depending on the house and the requirements of the users), it appears that the wind turbine should drive a three-phase alternator. The same can be said from the safety point of view. Therefore, the block diagram of the system will be (in the present state of the art) as follows. [Block diagram not reproduced.]
*Measurements are made at 24 m; the results are upper bounds of the average p_a.
Figure 4
Figure 5
Figure 6
Due to the variations of the alternator voltage, the battery charger will not be a standard one (designed for a 220 V mains supply).
Figure 7
Particular attention is given to the following straightforward solution. [Circuit diagram not reproduced.] Indeed, the voltage as well as the frequency drop with the wind speed, while the DC current remains fairly constant.

The price of the storage batteries is kept to a minimum in the present context, where heating is the major requirement: provided priority is given at any time to the domestic appliances, the required power is available whenever the wind exceeds the 3 m/sec cut-in speed, i.e. about 91% of the time. Standard batteries could be used.

A 3 kW DC/AC converter has been designed, sufficient to power a single large appliance at a time; a second converter will look after all the other loads. Industrial converters are not appropriate for our goal: their efficiency ranges from 55% to 85% at full load, they consume considerable power at zero load, they provide a voltage close to a very stable sine wave (not required for home applications), and their price is high. The converter designed here has a high efficiency at all loads and is low-priced thanks to currently available components. Its brain is a TDA 2640, an integrated circuit frequently used in television sets, which controls the power transistors feeding a 48/220 V transformer.
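The priority rule just described (appliances served first, any surplus diverted to heating) reduces to a few lines of logic. The sketch below is an editorial illustration only; the function name and the example numbers are not from the actual installation.

```python
# Sketch of the priority rule described above: the domestic appliances
# are always served first, and any surplus wind power goes to heating.
# Names and the example numbers are illustrative, not from the paper.

def dispatch(p_wind_kw, p_appliances_kw):
    """Split available wind power between appliances and heating (kW)."""
    to_appliances = min(p_wind_kw, p_appliances_kw)
    to_heating = max(p_wind_kw - p_appliances_kw, 0.0)
    return to_appliances, to_heating

# 10 kW available with 3 kW of appliance load: 7 kW is left for heating.
print(dispatch(10.0, 3.0))  # (3.0, 7.0)
```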
GENERAL OBJECTIVE

The objective is to promote and assist research and development and other scientific or technological work concerning all aspects of energy and to disseminate knowledge generally concerning energy for the benefit of the public at large.

TERMS OF REFERENCE

The Watt Committee on Energy, being a Committee representing professional people interested in energy topics through their various institutions, has the following terms of reference:

1. To make the maximum practical use of the skills and knowledge available in the member institutions to assist in the solution of both present and future energy problems, concentrating on the UK aspects of winning, conversion, transmission and utilisation of energy and recognising also overseas implications.
2. To contribute by all possible means to the formulation of national energy policies.
3. To prepare statements from time to time on the energy situation for publication as an official view of The Watt Committee on Energy in the journals of all the participating institutions. These statements would also form the basis for representation to the general public, commerce, industry and local and central government.
4. To identify those areas in the field of energy in which co-operation between the various professional institutions could be really useful. To tackle particular problems as they arise and publish the results of investigations carried out if suitable. There would also, wherever possible, be a follow-up.
5. To review existing research into energy problems and recommend, in collaboration with others, areas needing further investigation, research and development.
6. To co-ordinate future conferences, courses and the like being organised by the participating institutions both to avoid overlapping and to maximise co-operation and impact on the general public.

EXECUTIVE COMMITTEE (as at June 1981), chaired by Dr J.H.Chesters, OBE:

Mr M.Anthony, Institution of Mining and Metallurgy
Mr C.W.Banyard (Treasurer), Institute of Cost & Management Accountants
Mr H.Brown, Institution of Plant Engineers
Professor I.C.Cheeseman, Chartered Institute of Transport
Mr A.Cluer, Institute of Petroleum
Professor A.W.Crook, Institution of Mechanical Engineers
Dr W.C.Fergusson, Plastics and Rubber Institute
Mr R.S.Hackett, Institution of Gas Engineers
Professor D.O.Hall, Institute of Biology
Dr D.Hutchinson, Combustion Institute
Dr A.J.Jackson, Institution of Electrical Engineers
Dr J.D.Lewins, Institution of Nuclear Engineers
Mr G.K.C.Pardoe, Royal Aeronautical Society
Mr W.B.Pascall, Royal Institute of British Architects
Dr J.M.W.Rhys, Society of Business Economists
Dr P.A.A.Scott, Royal Society of Chemistry
Mr J.M.Solbett, Institution of Chemical Engineers
Professor J.Swithenbank, Institute of Energy
Mr G.Victory, Institute of Marine Engineers
Dr F.Walley, CB, Institution of Civil Engineers
Mr J.G.Worley, British Nuclear Energy Society
Mrs G.Banyard, Secretary
Note:
1. A part of the executive rotates on an annual basis at 30th April each year. The following institutions were members of the executive for the year shown:

1979/80:
Association of Home Economists - Miss W.Matthews
Chartered Institution of Building Services - Mr C.Izzard
Institute of Physics - Professor F.J.Weinberg
Institution of Municipal Engineers - Mr H.D.Peake

1980/81:
Institution of Electrical & Electronics Technician Engineers - Dr G.F.Reynolds
Institution of Production Engineers - Dr J.C.McVeigh
Royal Institution - Sir Peter Kent
Royal Institution of Chartered Surveyors - Mr K.W.Bailey
Royal Society of Arts - Mr T.Canted
Royal Town Planning Institute - Mr F.J.C.Amos
2. Professor J.E.Allen, Royal Aeronautical Society, is Adviser to the Business Planning Committee. Sir Peter Kent, Royal Institution, is a member of the Business Planning Committee.